00:00:00.001 Started by upstream project "autotest-per-patch" build number 132585
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.144 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.145 The recommended git tool is: git
00:00:00.145 using credential 00000000-0000-0000-0000-000000000002
00:00:00.147 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.234 Fetching changes from the remote Git repository
00:00:00.242 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.290 Using shallow fetch with depth 1
00:00:00.290 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.290 > git --version # timeout=10
00:00:00.322 > git --version # 'git version 2.39.2'
00:00:00.322 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.349 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.349 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.710 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.723 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.738 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.738 > git config core.sparsecheckout # timeout=10
00:00:07.752 > git read-tree -mu HEAD # timeout=10
00:00:07.771 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.794 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.794 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.878 [Pipeline] Start of Pipeline
00:00:07.891 [Pipeline] library
00:00:07.897 Loading library shm_lib@master
00:00:07.897 Library shm_lib@master is cached. Copying from home.
00:00:07.947 [Pipeline] node
00:00:07.963 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.964 [Pipeline] {
00:00:07.973 [Pipeline] catchError
00:00:07.974 [Pipeline] {
00:00:07.984 [Pipeline] wrap
00:00:07.990 [Pipeline] {
00:00:07.996 [Pipeline] stage
00:00:07.997 [Pipeline] { (Prologue)
00:00:08.182 [Pipeline] sh
00:00:08.469 + logger -p user.info -t JENKINS-CI
00:00:08.489 [Pipeline] echo
00:00:08.490 Node: CYP9
00:00:08.498 [Pipeline] sh
00:00:08.806 [Pipeline] setCustomBuildProperty
00:00:08.821 [Pipeline] echo
00:00:08.823 Cleanup processes
00:00:08.829 [Pipeline] sh
00:00:09.120 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.120 1641337 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.137 [Pipeline] sh
00:00:09.428 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.428 ++ grep -v 'sudo pgrep'
00:00:09.428 ++ awk '{print $1}'
00:00:09.428 + sudo kill -9
00:00:09.428 + true
00:00:09.444 [Pipeline] cleanWs
00:00:09.455 [WS-CLEANUP] Deleting project workspace...
00:00:09.455 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.463 [WS-CLEANUP] done
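The "Cleanup processes" step above is a stock Jenkins idiom: list anything still running out of the workspace, kill it, and tolerate an empty match. A minimal standalone sketch of the same logic (the path is this job's workspace; "|| true" plays the role of the log's "+ true", so a run with nothing to kill does not fail the stage):

  ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  # pgrep -af also reports its own sudo wrapper, hence the grep -v
  pids=$(sudo pgrep -af "$ws/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  sudo kill -9 $pids || true   # $pids may be empty, as it is in this run
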
00:00:09.467 [Pipeline] setCustomBuildProperty
00:00:09.482 [Pipeline] sh
00:00:09.766 + sudo git config --global --replace-all safe.directory '*'
00:00:09.863 [Pipeline] httpRequest
00:00:10.337 [Pipeline] echo
00:00:10.339 Sorcerer 10.211.164.101 is alive
00:00:10.348 [Pipeline] retry
00:00:10.350 [Pipeline] {
00:00:10.363 [Pipeline] httpRequest
00:00:10.368 HttpMethod: GET
00:00:10.368 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.369 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.392 Response Code: HTTP/1.1 200 OK
00:00:10.393 Success: Status code 200 is in the accepted range: 200,404
00:00:10.393 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:26.006 [Pipeline] }
00:00:26.027 [Pipeline] // retry
00:00:26.035 [Pipeline] sh
00:00:26.327 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:26.345 [Pipeline] httpRequest
00:00:27.990 [Pipeline] echo
00:00:27.999 Sorcerer 10.211.164.101 is alive
00:00:28.011 [Pipeline] retry
00:00:28.013 [Pipeline] {
00:00:28.021 [Pipeline] httpRequest
00:00:28.024 HttpMethod: GET
00:00:28.025 URL: http://10.211.164.101/packages/spdk_37db29af368a83b43d1e8dbbaedcd1722d2fcba3.tar.gz
00:00:28.026 Sending request to url: http://10.211.164.101/packages/spdk_37db29af368a83b43d1e8dbbaedcd1722d2fcba3.tar.gz
00:00:28.036 Response Code: HTTP/1.1 200 OK
00:00:28.036 Success: Status code 200 is in the accepted range: 200,404
00:00:28.036 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_37db29af368a83b43d1e8dbbaedcd1722d2fcba3.tar.gz
00:02:18.846 [Pipeline] }
00:02:18.864 [Pipeline] // retry
00:02:18.871 [Pipeline] sh
00:02:19.159 + tar --no-same-owner -xf spdk_37db29af368a83b43d1e8dbbaedcd1722d2fcba3.tar.gz
00:02:22.479 [Pipeline] sh
00:02:22.767 + git -C spdk log --oneline -n5
00:02:22.767 37db29af3 lib/reduce: Fix an incorrect chunk map index
00:02:22.767 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:02:22.767 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:02:22.767 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev
00:02:22.767 2e10c84c8 nvmf: Expose DIF type of namespace to host again
00:02:22.778 [Pipeline] }
00:02:22.792 [Pipeline] // stage
00:02:22.800 [Pipeline] stage
00:02:22.802 [Pipeline] { (Prepare)
00:02:22.817 [Pipeline] writeFile
00:02:22.831 [Pipeline] sh
00:02:23.117 + logger -p user.info -t JENKINS-CI
00:02:23.132 [Pipeline] sh
00:02:23.422 + logger -p user.info -t JENKINS-CI
00:02:23.435 [Pipeline] sh
00:02:23.723 + cat autorun-spdk.conf
00:02:23.723 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:23.723 SPDK_TEST_NVMF=1
00:02:23.723 SPDK_TEST_NVME_CLI=1
00:02:23.723 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:23.723 SPDK_TEST_NVMF_NICS=e810
00:02:23.723 SPDK_TEST_VFIOUSER=1
00:02:23.723 SPDK_RUN_UBSAN=1
00:02:23.723 NET_TYPE=phy
00:02:23.733 RUN_NIGHTLY=0
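The autorun-spdk.conf dumped above is a plain shell fragment: each SPDK_TEST_*/SPDK_RUN_* flag gates one test suite, and the job simply sources the file, as the xtrace below shows. A minimal sketch of that consumption pattern (variable names are from the log; the default-fallback line is an assumption about how unset flags are commonly handled, not taken from this log):

  conf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
  [[ -f $conf ]] && source "$conf"   # mirrors the "+ [[ -f ... ]]" / "+ source ..." lines below
  : "${SPDK_TEST_NVMF:=0}" "${SPDK_RUN_UBSAN:=0}"   # assumed fallback for flags the conf omits
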
00:02:23.738 [Pipeline] readFile
00:02:23.762 [Pipeline] withEnv
00:02:23.765 [Pipeline] {
00:02:23.778 [Pipeline] sh
00:02:24.070 + set -ex
00:02:24.070 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:24.070 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:24.070 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:24.070 ++ SPDK_TEST_NVMF=1
00:02:24.070 ++ SPDK_TEST_NVME_CLI=1
00:02:24.070 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:24.070 ++ SPDK_TEST_NVMF_NICS=e810
00:02:24.070 ++ SPDK_TEST_VFIOUSER=1
00:02:24.070 ++ SPDK_RUN_UBSAN=1
00:02:24.070 ++ NET_TYPE=phy
00:02:24.070 ++ RUN_NIGHTLY=0
00:02:24.070 + case $SPDK_TEST_NVMF_NICS in
00:02:24.070 + DRIVERS=ice
00:02:24.070 + [[ tcp == \r\d\m\a ]]
00:02:24.070 + [[ -n ice ]]
00:02:24.070 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:24.070 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:24.070 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:24.070 rmmod: ERROR: Module irdma is not currently loaded
00:02:24.070 rmmod: ERROR: Module i40iw is not currently loaded
00:02:24.070 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:24.070 + true
00:02:24.070 + for D in $DRIVERS
00:02:24.070 + sudo modprobe ice
00:02:24.070 + exit 0
00:02:24.081 [Pipeline] }
00:02:24.095 [Pipeline] // withEnv
00:02:24.101 [Pipeline] }
00:02:24.114 [Pipeline] // stage
00:02:24.125 [Pipeline] catchError
00:02:24.126 [Pipeline] {
00:02:24.141 [Pipeline] timeout
00:02:24.141 Timeout set to expire in 1 hr 0 min
00:02:24.143 [Pipeline] {
00:02:24.158 [Pipeline] stage
00:02:24.160 [Pipeline] { (Tests)
00:02:24.176 [Pipeline] sh
00:02:24.466 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:24.467 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:24.467 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:24.467 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:24.467 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:24.467 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:24.467 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:24.467 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:24.467 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:24.467 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:24.467 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:24.467 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:24.467 + source /etc/os-release
00:02:24.467 ++ NAME='Fedora Linux'
00:02:24.467 ++ VERSION='39 (Cloud Edition)'
00:02:24.467 ++ ID=fedora
00:02:24.467 ++ VERSION_ID=39
00:02:24.467 ++ VERSION_CODENAME=
00:02:24.467 ++ PLATFORM_ID=platform:f39
00:02:24.467 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:24.467 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:24.467 ++ LOGO=fedora-logo-icon
00:02:24.467 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:24.467 ++ HOME_URL=https://fedoraproject.org/
00:02:24.467 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:24.467 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:24.467 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:24.467 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:24.467 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:24.467 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:24.467 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:24.467 ++ SUPPORT_END=2024-11-12
00:02:24.467 ++ VARIANT='Cloud Edition'
00:02:24.467 ++ VARIANT_ID=cloud
00:02:24.467 + uname -a
00:02:24.467 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:24.467 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:27.765 Hugepages
00:02:27.765 node hugesize free / total
00:02:27.765 node0 1048576kB 0 / 0
00:02:27.765 node0 2048kB 0 / 0
00:02:27.765 node1 1048576kB 0 / 0
00:02:27.765 node1 2048kB 0 / 0
00:02:27.765
00:02:27.765 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:27.765 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:02:27.765 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:02:27.765 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:02:27.765 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:02:27.765 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:02:27.765 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:02:27.765 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:02:27.765 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:02:27.765 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:02:27.765 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:02:27.765 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:02:27.765 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:02:27.765 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:02:27.765 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:02:27.765 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:02:27.765 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:02:27.765 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:02:27.765 + rm -f /tmp/spdk-ld-path
00:02:27.766 + source autorun-spdk.conf
00:02:27.766 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:27.766 ++ SPDK_TEST_NVMF=1
00:02:27.766 ++ SPDK_TEST_NVME_CLI=1
00:02:27.766 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:27.766 ++ SPDK_TEST_NVMF_NICS=e810
00:02:27.766 ++ SPDK_TEST_VFIOUSER=1
00:02:27.766 ++ SPDK_RUN_UBSAN=1
00:02:27.766 ++ NET_TYPE=phy
00:02:27.766 ++ RUN_NIGHTLY=0
00:02:27.766 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:27.766 + [[ -n '' ]]
00:02:27.766 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:27.766 + for M in /var/spdk/build-*-manifest.txt
00:02:27.766 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:27.766 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:27.766 + for M in /var/spdk/build-*-manifest.txt
00:02:27.766 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:27.766 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:27.766 + for M in /var/spdk/build-*-manifest.txt
00:02:27.766 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:27.766 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:27.766 ++ uname
00:02:27.766 + [[ Linux == \L\i\n\u\x ]]
00:02:27.766 + sudo dmesg -T
00:02:27.766 + sudo dmesg --clear
00:02:27.766 + dmesg_pid=1642905
00:02:27.766 + [[ Fedora Linux == FreeBSD ]]
00:02:27.766 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:27.766 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:27.766 + sudo dmesg -Tw
00:02:27.766 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:27.766 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:27.766 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:02:27.766 + [[ -x /usr/src/fio-static/fio ]]
00:02:27.766 + export FIO_BIN=/usr/src/fio-static/fio
00:02:27.766 + FIO_BIN=/usr/src/fio-static/fio
00:02:27.766 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:27.766 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:27.766 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:27.766 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:27.766 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:27.766 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:27.766 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:27.766 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:27.766 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:28.027 08:01:25 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:28.027 08:01:25 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:28.027 08:01:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:28.027 08:01:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:02:28.027 08:01:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:02:28.027 08:01:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:28.027 08:01:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:02:28.027 08:01:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:02:28.027 08:01:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:02:28.027 08:01:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:02:28.027 08:01:25 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:02:28.027 08:01:25 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:28.027 08:01:25 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:28.027 08:01:25 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:28.027 08:01:25 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:28.027 08:01:25 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:28.027 08:01:25 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:28.027 08:01:25 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:28.027 08:01:25 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:28.027 08:01:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:28.027 08:01:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:28.027 08:01:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:28.027 08:01:25 -- paths/export.sh@5 -- $ export PATH
00:02:28.027 08:01:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:28.027 08:01:25 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:28.027 08:01:25 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:28.027 08:01:25 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732777285.XXXXXX
00:02:28.027 08:01:25 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732777285.fckn9e
00:02:28.027 08:01:25 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:28.027 08:01:25 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:28.027 08:01:25 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:02:28.027 08:01:25 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:02:28.027 08:01:25 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:02:28.027 08:01:25 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:28.027 08:01:25 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:28.027 08:01:25 -- common/autotest_common.sh@10 -- $ set +x
00:02:28.027 08:01:25 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:02:28.027 08:01:25 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:28.027 08:01:25 -- pm/common@17 -- $ local monitor
00:02:28.027 08:01:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:28.027 08:01:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:28.027 08:01:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:28.027 08:01:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:28.027 08:01:25 -- pm/common@21 -- $ date +%s
00:02:28.027 08:01:25 -- pm/common@21 -- $ date +%s
00:02:28.027 08:01:25 -- pm/common@25 -- $ sleep 1
00:02:28.027 08:01:25 -- pm/common@21 -- $ date +%s
00:02:28.027 08:01:25 -- pm/common@21 -- $ date +%s
00:02:28.027 08:01:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732777285
00:02:28.027 08:01:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732777285
00:02:28.027 08:01:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732777285
00:02:28.027 08:01:25 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732777285
00:02:28.027 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732777285_collect-cpu-load.pm.log
00:02:28.027 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732777285_collect-vmstat.pm.log
00:02:28.027 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732777285_collect-cpu-temp.pm.log
00:02:28.027 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732777285_collect-bmc-pm.bmc.pm.log
00:02:28.969 08:01:26 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:28.969 08:01:26 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:28.969 08:01:26 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:28.969 08:01:26 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:28.969 08:01:26 -- spdk/autobuild.sh@16 -- $ date -u
00:02:28.969 Thu Nov 28 07:01:26 AM UTC 2024
00:02:28.969 08:01:26 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:28.969 v25.01-pre-277-g37db29af3
00:02:28.969 08:01:26 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:28.969 08:01:26 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:28.969 08:01:26 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:28.969 08:01:26 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:28.969 08:01:26 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:28.969 08:01:26 -- common/autotest_common.sh@10 -- $ set +x
00:02:28.969 ************************************
00:02:28.969 START TEST ubsan
00:02:28.969 ************************************
00:02:28.969 08:01:26 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:28.969 using ubsan
00:02:28.969
00:02:28.969 real 0m0.001s
00:02:28.969 user 0m0.001s
00:02:28.969 sys 0m0.000s
00:02:28.969 08:01:26 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:29.230 08:01:26 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:29.230 ************************************
00:02:29.230 END TEST ubsan
00:02:29.230 ************************************
00:02:29.230 08:01:26 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:29.230 08:01:26 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:29.230 08:01:26 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:29.230 08:01:26 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:29.230 08:01:26 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:29.230 08:01:26 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:29.230 08:01:26 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:29.230 08:01:26 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:29.230 08:01:26 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:02:29.230 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:29.230 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:29.801 Using 'verbs' RDMA provider
00:02:45.757 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:57.987 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:58.558 Creating mk/config.mk...done.
00:02:58.558 Creating mk/cc.flags.mk...done.
00:02:58.558 Type 'make' to build.
00:02:58.558 08:01:55 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:02:58.558 08:01:55 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:58.558 08:01:55 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:58.558 08:01:55 -- common/autotest_common.sh@10 -- $ set +x
00:02:58.558 ************************************
00:02:58.558 START TEST make
00:02:58.558 ************************************
00:02:58.558 08:01:55 make -- common/autotest_common.sh@1129 -- $ make -j144
00:02:59.130 make[1]: Nothing to be done for 'all'.
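The START TEST / END TEST banners and the real/user/sys triple above come from SPDK's run_test helper in common/autotest_common.sh, which wraps an arbitrary command in banners and a time measurement. The real helper does more bookkeeping (timing records, xtrace handling); a hypothetical reduction of the behavior visible in this log:

  run_test() {                        # sketch only; not SPDK's actual implementation
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                         # e.g. `echo 'using ubsan'` above, or `make -j144`
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
  }
  run_test ubsan echo 'using ubsan'
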
00:03:00.518 The Meson build system
00:03:00.518 Version: 1.5.0
00:03:00.518 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:00.518 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:00.518 Build type: native build
00:03:00.518 Project name: libvfio-user
00:03:00.518 Project version: 0.0.1
00:03:00.518 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:00.518 C linker for the host machine: cc ld.bfd 2.40-14
00:03:00.518 Host machine cpu family: x86_64
00:03:00.518 Host machine cpu: x86_64
00:03:00.518 Run-time dependency threads found: YES
00:03:00.518 Library dl found: YES
00:03:00.518 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:00.518 Run-time dependency json-c found: YES 0.17
00:03:00.518 Run-time dependency cmocka found: YES 1.1.7
00:03:00.518 Program pytest-3 found: NO
00:03:00.518 Program flake8 found: NO
00:03:00.518 Program misspell-fixer found: NO
00:03:00.518 Program restructuredtext-lint found: NO
00:03:00.518 Program valgrind found: YES (/usr/bin/valgrind)
00:03:00.518 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:00.518 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:00.518 Compiler for C supports arguments -Wwrite-strings: YES
00:03:00.518 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:00.519 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:00.519 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:00.519 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:00.519 Build targets in project: 8
00:03:00.519 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:00.519 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:00.519
00:03:00.519 libvfio-user 0.0.1
00:03:00.519
00:03:00.519 User defined options
00:03:00.519 buildtype : debug
00:03:00.519 default_library: shared
00:03:00.519 libdir : /usr/local/lib
00:03:00.519
00:03:00.519 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:00.779 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:01.039 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:01.040 [2/37] Compiling C object samples/null.p/null.c.o
00:03:01.040 [3/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:01.040 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:01.040 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:01.040 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:01.040 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:01.040 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:01.040 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:01.040 [10/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:01.040 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:01.040 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:01.040 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:01.040 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:01.040 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:01.040 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:01.040 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:01.040 [18/37] Compiling C object samples/server.p/server.c.o
00:03:01.040 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:01.040 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:01.040 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:01.040 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:01.040 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:01.040 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:01.040 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:01.040 [26/37] Compiling C object samples/client.p/client.c.o
00:03:01.040 [27/37] Linking target samples/client
00:03:01.040 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:01.040 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:01.040 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:03:01.301 [31/37] Linking target test/unit_tests
00:03:01.301 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:01.301 [33/37] Linking target samples/shadow_ioeventfd_server
00:03:01.301 [34/37] Linking target samples/null
00:03:01.301 [35/37] Linking target samples/server
00:03:01.301 [36/37] Linking target samples/gpio-pci-idio-16
00:03:01.301 [37/37] Linking target samples/lspci
00:03:01.301 INFO: autodetecting backend as ninja
00:03:01.301 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
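Stripped of timestamps, the libvfio-user step above (and the install line that follows) is a standard out-of-tree Meson flow: configure with the options listed under "User defined options", build with ninja, then stage the result into a DESTDIR. Roughly equivalent manual commands, with paths taken from the log (SPDK's build scripts normally drive this, so treat it as a sketch rather than the exact invocation used here):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  meson setup "$spdk/build/libvfio-user/build-debug" "$spdk/libvfio-user" \
    --buildtype=debug --default-library=shared --libdir=/usr/local/lib
  ninja -C "$spdk/build/libvfio-user/build-debug"
  DESTDIR="$spdk/build/libvfio-user" meson install --quiet -C "$spdk/build/libvfio-user/build-debug"
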
00:03:01.301 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:01.875 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:01.875 ninja: no work to do.
00:03:08.469 The Meson build system
00:03:08.469 Version: 1.5.0
00:03:08.469 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:03:08.469 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:03:08.469 Build type: native build
00:03:08.469 Program cat found: YES (/usr/bin/cat)
00:03:08.469 Project name: DPDK
00:03:08.469 Project version: 24.03.0
00:03:08.469 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:08.469 C linker for the host machine: cc ld.bfd 2.40-14
00:03:08.469 Host machine cpu family: x86_64
00:03:08.469 Host machine cpu: x86_64
00:03:08.469 Message: ## Building in Developer Mode ##
00:03:08.469 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:08.469 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:03:08.469 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:08.469 Program python3 found: YES (/usr/bin/python3)
00:03:08.469 Program cat found: YES (/usr/bin/cat)
00:03:08.469 Compiler for C supports arguments -march=native: YES
00:03:08.469 Checking for size of "void *" : 8
00:03:08.469 Checking for size of "void *" : 8 (cached)
00:03:08.469 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:08.469 Library m found: YES
00:03:08.469 Library numa found: YES
00:03:08.469 Has header "numaif.h" : YES
00:03:08.469 Library fdt found: NO
00:03:08.469 Library execinfo found: NO
00:03:08.469 Has header "execinfo.h" : YES
00:03:08.469 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:08.469 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:08.469 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:08.469 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:08.469 Run-time dependency openssl found: YES 3.1.1
00:03:08.469 Run-time dependency libpcap found: YES 1.10.4
00:03:08.469 Has header "pcap.h" with dependency libpcap: YES
00:03:08.469 Compiler for C supports arguments -Wcast-qual: YES
00:03:08.469 Compiler for C supports arguments -Wdeprecated: YES
00:03:08.469 Compiler for C supports arguments -Wformat: YES
00:03:08.469 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:08.469 Compiler for C supports arguments -Wformat-security: NO
00:03:08.469 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:08.469 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:08.469 Compiler for C supports arguments -Wnested-externs: YES
00:03:08.469 Compiler for C supports arguments -Wold-style-definition: YES
00:03:08.469 Compiler for C supports arguments -Wpointer-arith: YES
00:03:08.469 Compiler for C supports arguments -Wsign-compare: YES
00:03:08.469 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:08.469 Compiler for C supports arguments -Wundef: YES
00:03:08.469 Compiler for C supports arguments -Wwrite-strings: YES
00:03:08.469 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:08.469 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:08.469 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:08.469 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:08.469 Program objdump found: YES (/usr/bin/objdump)
00:03:08.469 Compiler for C supports arguments -mavx512f: YES
00:03:08.469 Checking if "AVX512 checking" compiles: YES
00:03:08.469 Fetching value of define "__SSE4_2__" : 1
00:03:08.469 Fetching value of define "__AES__" : 1
00:03:08.469 Fetching value of define "__AVX__" : 1
00:03:08.469 Fetching value of define "__AVX2__" : 1
00:03:08.469 Fetching value of define "__AVX512BW__" : 1
00:03:08.469 Fetching value of define "__AVX512CD__" : 1
00:03:08.469 Fetching value of define "__AVX512DQ__" : 1
00:03:08.469 Fetching value of define "__AVX512F__" : 1
00:03:08.469 Fetching value of define "__AVX512VL__" : 1
00:03:08.469 Fetching value of define "__PCLMUL__" : 1
00:03:08.469 Fetching value of define "__RDRND__" : 1
00:03:08.469 Fetching value of define "__RDSEED__" : 1
00:03:08.469 Fetching value of define "__VPCLMULQDQ__" : 1
00:03:08.469 Fetching value of define "__znver1__" : (undefined)
00:03:08.469 Fetching value of define "__znver2__" : (undefined)
00:03:08.469 Fetching value of define "__znver3__" : (undefined)
00:03:08.469 Fetching value of define "__znver4__" : (undefined)
00:03:08.469 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:08.469 Message: lib/log: Defining dependency "log"
00:03:08.469 Message: lib/kvargs: Defining dependency "kvargs"
00:03:08.469 Message: lib/telemetry: Defining dependency "telemetry"
00:03:08.469 Checking for function "getentropy" : NO
00:03:08.469 Message: lib/eal: Defining dependency "eal"
00:03:08.469 Message: lib/ring: Defining dependency "ring"
00:03:08.469 Message: lib/rcu: Defining dependency "rcu"
00:03:08.469 Message: lib/mempool: Defining dependency "mempool"
00:03:08.469 Message: lib/mbuf: Defining dependency "mbuf"
00:03:08.469 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:08.469 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:08.469 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:08.469 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:08.469 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:08.469 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:03:08.469 Compiler for C supports arguments -mpclmul: YES
00:03:08.469 Compiler for C supports arguments -maes: YES
00:03:08.469 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:08.469 Compiler for C supports arguments -mavx512bw: YES
00:03:08.469 Compiler for C supports arguments -mavx512dq: YES
00:03:08.469 Compiler for C supports arguments -mavx512vl: YES
00:03:08.469 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:08.469 Compiler for C supports arguments -mavx2: YES
00:03:08.469 Compiler for C supports arguments -mavx: YES
00:03:08.469 Message: lib/net: Defining dependency "net"
00:03:08.469 Message: lib/meter: Defining dependency "meter"
00:03:08.469 Message: lib/ethdev: Defining dependency "ethdev"
00:03:08.469 Message: lib/pci: Defining dependency "pci"
00:03:08.469 Message: lib/cmdline: Defining dependency "cmdline"
00:03:08.469 Message: lib/hash: Defining dependency "hash"
00:03:08.469 Message: lib/timer: Defining dependency "timer"
00:03:08.469 Message: lib/compressdev: Defining dependency "compressdev"
00:03:08.469 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:08.469 Message: lib/dmadev: Defining dependency "dmadev"
00:03:08.469 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:08.469 Message: lib/power: Defining dependency "power"
00:03:08.470 Message: lib/reorder: Defining dependency "reorder"
00:03:08.470 Message: lib/security: Defining dependency "security"
00:03:08.470 Has header "linux/userfaultfd.h" : YES
00:03:08.470 Has header "linux/vduse.h" : YES
00:03:08.470 Message: lib/vhost: Defining dependency "vhost"
00:03:08.470 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:08.470 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:08.470 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:08.470 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:08.470 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:08.470 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:08.470 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:08.470 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:08.470 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:08.470 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:08.470 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:08.470 Configuring doxy-api-html.conf using configuration
00:03:08.470 Configuring doxy-api-man.conf using configuration
00:03:08.470 Program mandb found: YES (/usr/bin/mandb)
00:03:08.470 Program sphinx-build found: NO
00:03:08.470 Configuring rte_build_config.h using configuration
00:03:08.470 Message:
00:03:08.470 =================
00:03:08.470 Applications Enabled
00:03:08.470 =================
00:03:08.470
00:03:08.470 apps:
00:03:08.470
00:03:08.470
00:03:08.470 Message:
00:03:08.470 =================
00:03:08.470 Libraries Enabled
00:03:08.470 =================
00:03:08.470
00:03:08.470 libs:
00:03:08.470 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:08.470 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:08.470 cryptodev, dmadev, power, reorder, security, vhost,
00:03:08.470
00:03:08.470 Message:
00:03:08.470 ===============
00:03:08.470 Drivers Enabled
00:03:08.470 ===============
00:03:08.470
00:03:08.470 common:
00:03:08.470
00:03:08.470 bus:
00:03:08.470 pci, vdev,
00:03:08.470 mempool:
00:03:08.470 ring,
00:03:08.470 dma:
00:03:08.470
00:03:08.470 net:
00:03:08.470
00:03:08.470 crypto:
00:03:08.470
00:03:08.470 compress:
00:03:08.470
00:03:08.470 vdpa:
00:03:08.470
00:03:08.470
00:03:08.470 Message:
00:03:08.470 =================
00:03:08.470 Content Skipped
00:03:08.470 =================
00:03:08.470
00:03:08.470 apps:
00:03:08.470 dumpcap: explicitly disabled via build config
00:03:08.470 graph: explicitly disabled via build config
00:03:08.470 pdump: explicitly disabled via build config
00:03:08.470 proc-info: explicitly disabled via build config
00:03:08.470 test-acl: explicitly disabled via build config
00:03:08.470 test-bbdev: explicitly disabled via build config
00:03:08.470 test-cmdline: explicitly disabled via build config
00:03:08.470 test-compress-perf: explicitly disabled via build config
00:03:08.470 test-crypto-perf: explicitly disabled via build config
00:03:08.470 test-dma-perf: explicitly disabled via build config
00:03:08.470 test-eventdev: explicitly disabled via build config
00:03:08.470 test-fib: explicitly disabled via build config
00:03:08.470 test-flow-perf: explicitly disabled via build config
00:03:08.470 test-gpudev: explicitly disabled via build config
00:03:08.470 test-mldev: explicitly disabled via build config
00:03:08.470 test-pipeline: explicitly disabled via build config
00:03:08.470 test-pmd: explicitly disabled via build config
00:03:08.470 test-regex: explicitly disabled via build config
00:03:08.470 test-sad: explicitly disabled via build config
00:03:08.470 test-security-perf: explicitly disabled via build config
00:03:08.470
00:03:08.470 libs:
00:03:08.470 argparse: explicitly disabled via build config
00:03:08.470 metrics: explicitly disabled via build config
00:03:08.470 acl: explicitly disabled via build config
00:03:08.470 bbdev: explicitly disabled via build config
00:03:08.470 bitratestats: explicitly disabled via build config
00:03:08.470 bpf: explicitly disabled via build config
00:03:08.470 cfgfile: explicitly disabled via build config
00:03:08.470 distributor: explicitly disabled via build config
00:03:08.470 efd: explicitly disabled via build config
00:03:08.470 eventdev: explicitly disabled via build config
00:03:08.470 dispatcher: explicitly disabled via build config
00:03:08.470 gpudev: explicitly disabled via build config
00:03:08.470 gro: explicitly disabled via build config
00:03:08.470 gso: explicitly disabled via build config
00:03:08.470 ip_frag: explicitly disabled via build config
00:03:08.470 jobstats: explicitly disabled via build config
00:03:08.470 latencystats: explicitly disabled via build config
00:03:08.470 lpm: explicitly disabled via build config
00:03:08.470 member: explicitly disabled via build config
00:03:08.470 pcapng: explicitly disabled via build config
00:03:08.470 rawdev: explicitly disabled via build config
00:03:08.470 regexdev: explicitly disabled via build config
00:03:08.470 mldev: explicitly disabled via build config
00:03:08.470 rib: explicitly disabled via build config
00:03:08.470 sched: explicitly disabled via build config
00:03:08.470 stack: explicitly disabled via build config
00:03:08.470 ipsec: explicitly disabled via build config
00:03:08.470 pdcp: explicitly disabled via build config
00:03:08.470 fib: explicitly disabled via build config
00:03:08.470 port: explicitly disabled via build config
00:03:08.470 pdump: explicitly disabled via build config
00:03:08.470 table: explicitly disabled via build config
00:03:08.470 pipeline: explicitly disabled via build config
00:03:08.470 graph: explicitly disabled via build config
00:03:08.470 node: explicitly disabled via build config
00:03:08.470
00:03:08.470 drivers:
00:03:08.470 common/cpt: not in enabled drivers build config
00:03:08.470 common/dpaax: not in enabled drivers build config
00:03:08.470 common/iavf: not in enabled drivers build config
00:03:08.470 common/idpf: not in enabled drivers build config
00:03:08.470 common/ionic: not in enabled drivers build config
00:03:08.470 common/mvep: not in enabled drivers build config
00:03:08.470 common/octeontx: not in enabled drivers build config
00:03:08.470 bus/auxiliary: not in enabled drivers build config
00:03:08.470 bus/cdx: not in enabled drivers build config
00:03:08.470 bus/dpaa: not in enabled drivers build config
00:03:08.470 bus/fslmc: not in enabled drivers build config
00:03:08.470 bus/ifpga: not in enabled drivers build config
00:03:08.470 bus/platform: not in enabled drivers build config
00:03:08.470 bus/uacce: not in enabled drivers build config
00:03:08.470 bus/vmbus: not in enabled drivers build config
00:03:08.470 common/cnxk: not in enabled drivers build config
00:03:08.470 common/mlx5: not in enabled drivers build config
00:03:08.470 common/nfp: not in enabled drivers build config
00:03:08.470 common/nitrox: not in enabled drivers build config
00:03:08.470 common/qat: not in enabled drivers build config
00:03:08.470 common/sfc_efx: not in enabled drivers build config
00:03:08.470 mempool/bucket: not in enabled drivers build config
00:03:08.470 mempool/cnxk: not in enabled drivers build config
00:03:08.470 mempool/dpaa: not in enabled drivers build config
00:03:08.470 mempool/dpaa2: not in enabled drivers build config
00:03:08.470 mempool/octeontx: not in enabled drivers build config
00:03:08.470 mempool/stack: not in enabled drivers build config
00:03:08.470 dma/cnxk: not in enabled drivers build config
00:03:08.470 dma/dpaa: not in enabled drivers build config
00:03:08.470 dma/dpaa2: not in enabled drivers build config
00:03:08.470 dma/hisilicon: not in enabled drivers build config
00:03:08.470 dma/idxd: not in enabled drivers build config
00:03:08.470 dma/ioat: not in enabled drivers build config
00:03:08.470 dma/skeleton: not in enabled drivers build config
00:03:08.470 net/af_packet: not in enabled drivers build config
00:03:08.470 net/af_xdp: not in enabled drivers build config
00:03:08.470 net/ark: not in enabled drivers build config
00:03:08.470 net/atlantic: not in enabled drivers build config
00:03:08.470 net/avp: not in enabled drivers build config
00:03:08.470 net/axgbe: not in enabled drivers build config
00:03:08.470 net/bnx2x: not in enabled drivers build config
00:03:08.470 net/bnxt: not in enabled drivers build config
00:03:08.470 net/bonding: not in enabled drivers build config
00:03:08.470 net/cnxk: not in enabled drivers build config
00:03:08.470 net/cpfl: not in enabled drivers build config
00:03:08.470 net/cxgbe: not in enabled drivers build config
00:03:08.470 net/dpaa: not in enabled drivers build config
00:03:08.470 net/dpaa2: not in enabled drivers build config
00:03:08.470 net/e1000: not in enabled drivers build config
00:03:08.470 net/ena: not in enabled drivers build config
00:03:08.470 net/enetc: not in enabled drivers build config
00:03:08.470 net/enetfec: not in enabled drivers build config
00:03:08.470 net/enic: not in enabled drivers build config
00:03:08.470 net/failsafe: not in enabled drivers build config
00:03:08.470 net/fm10k: not in enabled drivers build config
00:03:08.470 net/gve: not in enabled drivers build config
00:03:08.470 net/hinic: not in enabled drivers build config
00:03:08.470 net/hns3: not in enabled drivers build config
00:03:08.470 net/i40e: not in enabled drivers build config
00:03:08.470 net/iavf: not in enabled drivers build config
00:03:08.470 net/ice: not in enabled drivers build config
00:03:08.470 net/idpf: not in enabled drivers build config
00:03:08.470 net/igc: not in enabled drivers build config
00:03:08.470 net/ionic: not in enabled drivers build config
00:03:08.470 net/ipn3ke: not in enabled drivers build config
00:03:08.470 net/ixgbe: not in enabled drivers build config
00:03:08.470 net/mana: not in enabled drivers build config
00:03:08.470 net/memif: not in enabled drivers build config
00:03:08.470 net/mlx4: not in enabled drivers build config
00:03:08.470 net/mlx5: not in enabled drivers build config
00:03:08.470 net/mvneta: not in enabled drivers build config
00:03:08.470 net/mvpp2: not in enabled drivers build config
00:03:08.470 net/netvsc: not in enabled drivers build config
00:03:08.470 net/nfb: not in enabled drivers build config
00:03:08.470 net/nfp: not in enabled drivers build config
00:03:08.470 net/ngbe: not in enabled drivers build config
00:03:08.470 net/null: not in enabled drivers build config
00:03:08.470 net/octeontx: not in enabled drivers build config
00:03:08.470 net/octeon_ep: not in enabled drivers build config
00:03:08.470 net/pcap: not in enabled drivers build config
00:03:08.471 net/pfe: not in enabled drivers build config
00:03:08.471 net/qede: not in enabled drivers build config
00:03:08.471 net/ring: not in enabled drivers build config
00:03:08.471 net/sfc: not in enabled drivers build config
00:03:08.471 net/softnic: not in enabled drivers build config
00:03:08.471 net/tap: not in enabled drivers build config
00:03:08.471 net/thunderx: not in enabled drivers build config
00:03:08.471 net/txgbe: not in enabled drivers build config
00:03:08.471 net/vdev_netvsc: not in enabled drivers build config
00:03:08.471 net/vhost: not in enabled drivers build config
00:03:08.471 net/virtio: not in enabled drivers build config
00:03:08.471 net/vmxnet3: not in enabled drivers build config
00:03:08.471 raw/*: missing internal dependency, "rawdev"
00:03:08.471 crypto/armv8: not in enabled drivers build config
00:03:08.471 crypto/bcmfs: not in enabled drivers build config
00:03:08.471 crypto/caam_jr: not in enabled drivers build config
00:03:08.471 crypto/ccp: not in enabled drivers build config
00:03:08.471 crypto/cnxk: not in enabled drivers build config
00:03:08.471 crypto/dpaa_sec: not in enabled drivers build config
00:03:08.471 crypto/dpaa2_sec: not in enabled drivers build config
00:03:08.471 crypto/ipsec_mb: not in enabled drivers build config
00:03:08.471 crypto/mlx5: not in enabled drivers build config
00:03:08.471 crypto/mvsam: not in enabled drivers build config
00:03:08.471 crypto/nitrox: not in enabled drivers build config
00:03:08.471 crypto/null: not in enabled drivers build config
00:03:08.471 crypto/octeontx: not in enabled drivers build config
00:03:08.471 crypto/openssl: not in enabled drivers build config
00:03:08.471 crypto/scheduler: not in enabled drivers build config
00:03:08.471 crypto/uadk: not in enabled drivers build config
00:03:08.471 crypto/virtio: not in enabled drivers build config
00:03:08.471 compress/isal: not in enabled drivers build config
00:03:08.471 compress/mlx5: not in enabled drivers build config
00:03:08.471 compress/nitrox: not in enabled drivers build config
00:03:08.471 compress/octeontx: not in enabled drivers build config
00:03:08.471 compress/zlib: not in enabled drivers build config
00:03:08.471 regex/*: missing internal dependency, "regexdev"
00:03:08.471 ml/*: missing internal dependency, "mldev"
00:03:08.471 vdpa/ifc: not in enabled drivers build config
00:03:08.471 vdpa/mlx5: not in enabled drivers build config
00:03:08.471 vdpa/nfp: not in enabled drivers build config
00:03:08.471 vdpa/sfc: not in enabled drivers build config
00:03:08.471 event/*: missing internal dependency, "eventdev"
00:03:08.471 baseband/*: missing internal dependency, "bbdev"
00:03:08.471 gpu/*: missing internal dependency, "gpudev"
00:03:08.471
00:03:08.471
00:03:08.471 Build targets in project: 84
00:03:08.471
00:03:08.471 DPDK 24.03.0
00:03:08.471
00:03:08.471 User defined options
00:03:08.471 buildtype : debug
00:03:08.471 default_library : shared
00:03:08.471 libdir : lib
00:03:08.471 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:03:08.471 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:08.471 c_link_args :
00:03:08.471 cpu_instruction_set: native
00:03:08.471 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:03:08.471 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:03:08.471 enable_docs : false
00:03:08.471 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:03:08.471 enable_kmods : false
00:03:08.471 max_lcores : 128
00:03:08.471 tests : false
00:03:08.471
00:03:08.471 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:08.471 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:03:08.471 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:08.471 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:08.471 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:08.471 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:08.471 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:08.471 [6/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:08.471 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:08.471 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:08.471 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:08.471 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:08.471 [11/267] Linking static target lib/librte_kvargs.a
00:03:08.471 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:08.471 [13/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:03:08.471 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:08.471 [15/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:08.471 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:08.471 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:08.471 [18/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:08.471 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:08.471 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:03:08.471 [21/267] Linking static target lib/librte_log.a
00:03:08.471 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:03:08.471 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:03:08.471 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:03:08.471 [25/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:03:08.471 [26/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:08.471 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:03:08.471 [28/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:03:08.471 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:03:08.471 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:03:08.471 [31/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:03:08.729 [32/267] Linking static target lib/librte_pci.a
00:03:08.729 [33/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:03:08.729 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:03:08.729 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:03:08.729 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:03:08.729 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:03:08.730 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:03:08.730 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:08.730 [40/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:08.730 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:08.989 [42/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:08.989 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:08.989 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:08.989 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:08.989 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:08.989 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:08.989 [48/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:08.989 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:08.989 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:08.989 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:08.989 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:08.989 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:08.989 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:08.989 [55/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:08.989 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:08.989 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:08.989 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:08.989 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:08.989 [60/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:08.989 [61/267] Linking static target lib/librte_meter.a
00:03:08.989 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:08.989 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:08.989 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:08.989 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:08.989 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:08.989 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:08.989 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:08.989 [69/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:08.989 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:08.989 [71/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:08.989 [72/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:08.989 [73/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:08.989 [74/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:03:08.989 [75/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:08.989 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:08.989 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:08.989 [78/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:03:08.989 [79/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:08.989 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:08.989 [81/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:08.989 [82/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:08.989 [83/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:08.989 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:08.989 [85/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:08.989 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:08.989 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:08.989 [88/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:08.989 [89/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:08.989 [90/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:08.989 [91/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:08.989 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:08.989 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:08.989 [94/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:03:08.989 [95/267] Linking static target lib/librte_ring.a
00:03:08.989 [96/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:08.989 [97/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:08.989 [98/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:03:08.989 [99/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:03:08.989 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:08.989 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:08.989 [102/267] Linking static target lib/librte_telemetry.a
00:03:08.989 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:03:08.989 [104/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:03:08.989 [105/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:03:08.989 [106/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:08.989 [107/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:03:08.989 [108/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:08.989 [109/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:08.989 [110/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:03:08.989 [111/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:03:08.989 [112/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:03:08.989 [113/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:08.989 [114/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:03:08.989 [115/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:03:08.989 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:03:08.989 [117/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:03:08.990 [118/267] Linking static target lib/librte_cmdline.a
00:03:08.990 [119/267] Linking static target lib/librte_timer.a
00:03:08.990 [120/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:03:08.990 [121/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:08.990 [122/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:08.990 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:03:08.990 [124/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:08.990 [125/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:03:08.990 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:03:08.990 [127/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:03:08.990 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:08.990 [129/267] Linking static target lib/librte_net.a
00:03:08.990 [130/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:03:08.990 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:08.990 [132/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:08.990 [133/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:08.990 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:03:08.990 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:03:08.990 [136/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:03:08.990 [137/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:08.990 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:03:08.990 [139/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:03:08.990 [140/267] Linking static target drivers/libtmp_rte_bus_vdev.a
00:03:08.990 [141/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:03:08.990 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:08.990 [143/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:03:08.990 [144/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:03:08.990 [145/267] Linking static target lib/librte_mempool.a
00:03:08.990 [146/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:08.990 [147/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:03:08.990 [148/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:03:08.990 [149/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:03:08.990 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:03:09.251
[151/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:09.251 [152/267] Linking static target lib/librte_power.a 00:03:09.251 [153/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:09.251 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:09.251 [155/267] Linking static target lib/librte_dmadev.a 00:03:09.251 [156/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:09.251 [157/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.251 [158/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:09.251 [159/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:09.251 [160/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:09.251 [161/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:09.251 [162/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:09.251 [163/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:09.251 [164/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:09.251 [165/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:09.251 [166/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:09.251 [167/267] Linking static target lib/librte_eal.a 00:03:09.251 [168/267] Linking static target lib/librte_rcu.a 00:03:09.251 [169/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:09.251 [170/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:09.251 [171/267] Linking static target lib/librte_compressdev.a 00:03:09.251 [172/267] Linking static target lib/librte_reorder.a 00:03:09.251 [173/267] Linking target lib/librte_log.so.24.1 00:03:09.251 [174/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:09.252 [175/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.252 [176/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:09.252 [177/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:09.252 [178/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:09.252 [179/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:09.252 [180/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:09.252 [181/267] Linking static target lib/librte_security.a 00:03:09.252 [182/267] Linking static target lib/librte_mbuf.a 00:03:09.252 [183/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:09.252 [184/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:09.252 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:09.252 [186/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:09.252 [187/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:09.252 [188/267] Linking static target drivers/librte_bus_vdev.a 00:03:09.252 [189/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:09.252 [190/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.252 [191/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:09.252 [192/267] Linking target lib/librte_kvargs.so.24.1 00:03:09.513 
[193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:09.513 [194/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:09.513 [195/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:09.513 [196/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:09.513 [197/267] Linking static target lib/librte_hash.a 00:03:09.513 [198/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:09.513 [199/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.513 [200/267] Linking static target drivers/librte_bus_pci.a 00:03:09.513 [201/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:09.513 [202/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:09.513 [203/267] Linking static target drivers/librte_mempool_ring.a 00:03:09.513 [204/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:09.513 [205/267] Linking static target lib/librte_cryptodev.a 00:03:09.513 [206/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:09.513 [207/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.513 [208/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:09.513 [209/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.513 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.776 [211/267] Linking target lib/librte_telemetry.so.24.1 00:03:09.776 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.776 [213/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.776 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:10.038 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.038 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.038 [217/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:10.038 [218/267] Linking static target lib/librte_ethdev.a 00:03:10.038 [219/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:10.038 [220/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.038 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.299 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.299 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.299 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.299 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.561 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.133 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:11.133 [228/267] Linking static target lib/librte_vhost.a 00:03:11.704 [229/267] 
Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.619 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.205 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.775 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.775 [233/267] Linking target lib/librte_eal.so.24.1 00:03:20.775 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:21.035 [235/267] Linking target lib/librte_ring.so.24.1 00:03:21.035 [236/267] Linking target lib/librte_meter.so.24.1 00:03:21.035 [237/267] Linking target lib/librte_pci.so.24.1 00:03:21.035 [238/267] Linking target lib/librte_dmadev.so.24.1 00:03:21.035 [239/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:21.035 [240/267] Linking target lib/librte_timer.so.24.1 00:03:21.035 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:21.035 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:21.035 [243/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:21.035 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:21.035 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:21.035 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:21.035 [247/267] Linking target lib/librte_mempool.so.24.1 00:03:21.036 [248/267] Linking target lib/librte_rcu.so.24.1 00:03:21.296 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:21.296 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:21.296 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:21.296 [252/267] Linking target lib/librte_mbuf.so.24.1 00:03:21.557 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:21.557 [254/267] Linking target lib/librte_compressdev.so.24.1 00:03:21.557 [255/267] Linking target lib/librte_reorder.so.24.1 00:03:21.557 [256/267] Linking target lib/librte_net.so.24.1 00:03:21.557 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:03:21.557 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:21.557 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:21.557 [260/267] Linking target lib/librte_hash.so.24.1 00:03:21.557 [261/267] Linking target lib/librte_cmdline.so.24.1 00:03:21.557 [262/267] Linking target lib/librte_security.so.24.1 00:03:21.816 [263/267] Linking target lib/librte_ethdev.so.24.1 00:03:21.816 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:21.816 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:21.816 [266/267] Linking target lib/librte_power.so.24.1 00:03:21.816 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:21.816 INFO: autodetecting backend as ninja 00:03:21.816 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:03:26.021 CC lib/log/log.o 00:03:26.021 CC lib/log/log_flags.o 00:03:26.021 CC lib/ut_mock/mock.o 00:03:26.021 CC lib/ut/ut.o 00:03:26.021 CC lib/log/log_deprecated.o 00:03:26.021 LIB 
libspdk_log.a 00:03:26.021 LIB libspdk_ut.a 00:03:26.021 LIB libspdk_ut_mock.a 00:03:26.021 SO libspdk_log.so.7.1 00:03:26.021 SO libspdk_ut.so.2.0 00:03:26.021 SO libspdk_ut_mock.so.6.0 00:03:26.021 SYMLINK libspdk_log.so 00:03:26.021 SYMLINK libspdk_ut_mock.so 00:03:26.021 SYMLINK libspdk_ut.so 00:03:26.021 CC lib/util/base64.o 00:03:26.021 CC lib/util/bit_array.o 00:03:26.021 CC lib/util/cpuset.o 00:03:26.021 CC lib/util/crc16.o 00:03:26.021 CC lib/util/crc32.o 00:03:26.021 CC lib/util/crc32c.o 00:03:26.021 CXX lib/trace_parser/trace.o 00:03:26.021 CC lib/util/crc32_ieee.o 00:03:26.021 CC lib/util/crc64.o 00:03:26.021 CC lib/util/dif.o 00:03:26.021 CC lib/ioat/ioat.o 00:03:26.021 CC lib/dma/dma.o 00:03:26.021 CC lib/util/fd.o 00:03:26.021 CC lib/util/fd_group.o 00:03:26.021 CC lib/util/file.o 00:03:26.021 CC lib/util/hexlify.o 00:03:26.021 CC lib/util/iov.o 00:03:26.021 CC lib/util/math.o 00:03:26.021 CC lib/util/net.o 00:03:26.021 CC lib/util/pipe.o 00:03:26.021 CC lib/util/strerror_tls.o 00:03:26.021 CC lib/util/string.o 00:03:26.021 CC lib/util/uuid.o 00:03:26.021 CC lib/util/xor.o 00:03:26.021 CC lib/util/zipf.o 00:03:26.021 CC lib/util/md5.o 00:03:26.281 CC lib/vfio_user/host/vfio_user_pci.o 00:03:26.281 CC lib/vfio_user/host/vfio_user.o 00:03:26.281 LIB libspdk_dma.a 00:03:26.281 SO libspdk_dma.so.5.0 00:03:26.281 LIB libspdk_ioat.a 00:03:26.281 SYMLINK libspdk_dma.so 00:03:26.281 SO libspdk_ioat.so.7.0 00:03:26.281 SYMLINK libspdk_ioat.so 00:03:26.281 LIB libspdk_vfio_user.a 00:03:26.542 SO libspdk_vfio_user.so.5.0 00:03:26.542 SYMLINK libspdk_vfio_user.so 00:03:26.542 LIB libspdk_util.a 00:03:26.542 SO libspdk_util.so.10.1 00:03:26.542 LIB libspdk_trace_parser.a 00:03:26.542 SO libspdk_trace_parser.so.6.0 00:03:26.803 SYMLINK libspdk_util.so 00:03:26.803 SYMLINK libspdk_trace_parser.so 00:03:27.064 CC lib/json/json_parse.o 00:03:27.064 CC lib/json/json_util.o 00:03:27.064 CC lib/env_dpdk/env.o 00:03:27.064 CC lib/json/json_write.o 00:03:27.064 CC lib/env_dpdk/memory.o 00:03:27.064 CC lib/env_dpdk/pci.o 00:03:27.064 CC lib/conf/conf.o 00:03:27.064 CC lib/env_dpdk/init.o 00:03:27.064 CC lib/vmd/vmd.o 00:03:27.064 CC lib/rdma_utils/rdma_utils.o 00:03:27.064 CC lib/env_dpdk/threads.o 00:03:27.064 CC lib/env_dpdk/pci_ioat.o 00:03:27.064 CC lib/env_dpdk/pci_virtio.o 00:03:27.064 CC lib/vmd/led.o 00:03:27.064 CC lib/idxd/idxd.o 00:03:27.064 CC lib/idxd/idxd_user.o 00:03:27.064 CC lib/env_dpdk/pci_vmd.o 00:03:27.064 CC lib/env_dpdk/pci_idxd.o 00:03:27.064 CC lib/idxd/idxd_kernel.o 00:03:27.064 CC lib/env_dpdk/pci_event.o 00:03:27.064 CC lib/env_dpdk/sigbus_handler.o 00:03:27.064 CC lib/env_dpdk/pci_dpdk.o 00:03:27.064 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:27.064 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:27.326 LIB libspdk_conf.a 00:03:27.326 SO libspdk_conf.so.6.0 00:03:27.326 LIB libspdk_rdma_utils.a 00:03:27.326 LIB libspdk_json.a 00:03:27.326 SO libspdk_rdma_utils.so.1.0 00:03:27.587 SYMLINK libspdk_conf.so 00:03:27.587 SO libspdk_json.so.6.0 00:03:27.587 SYMLINK libspdk_rdma_utils.so 00:03:27.587 SYMLINK libspdk_json.so 00:03:27.587 LIB libspdk_idxd.a 00:03:27.587 LIB libspdk_vmd.a 00:03:27.587 SO libspdk_idxd.so.12.1 00:03:27.587 SO libspdk_vmd.so.6.0 00:03:27.847 SYMLINK libspdk_idxd.so 00:03:27.847 SYMLINK libspdk_vmd.so 00:03:27.847 CC lib/rdma_provider/common.o 00:03:27.847 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:27.847 CC lib/jsonrpc/jsonrpc_server.o 00:03:27.847 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:27.847 CC lib/jsonrpc/jsonrpc_client.o 00:03:27.847 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:03:28.108 LIB libspdk_env_dpdk.a 00:03:28.108 SO libspdk_env_dpdk.so.15.1 00:03:28.108 LIB libspdk_rdma_provider.a 00:03:28.108 LIB libspdk_jsonrpc.a 00:03:28.108 SO libspdk_rdma_provider.so.7.0 00:03:28.108 SO libspdk_jsonrpc.so.6.0 00:03:28.108 SYMLINK libspdk_env_dpdk.so 00:03:28.108 SYMLINK libspdk_rdma_provider.so 00:03:28.108 SYMLINK libspdk_jsonrpc.so 00:03:28.680 CC lib/rpc/rpc.o 00:03:28.680 LIB libspdk_rpc.a 00:03:28.941 SO libspdk_rpc.so.6.0 00:03:28.941 SYMLINK libspdk_rpc.so 00:03:29.202 CC lib/trace/trace.o 00:03:29.202 CC lib/keyring/keyring.o 00:03:29.202 CC lib/trace/trace_flags.o 00:03:29.202 CC lib/keyring/keyring_rpc.o 00:03:29.202 CC lib/trace/trace_rpc.o 00:03:29.202 CC lib/notify/notify.o 00:03:29.202 CC lib/notify/notify_rpc.o 00:03:29.465 LIB libspdk_notify.a 00:03:29.465 SO libspdk_notify.so.6.0 00:03:29.465 LIB libspdk_trace.a 00:03:29.465 LIB libspdk_keyring.a 00:03:29.465 SO libspdk_keyring.so.2.0 00:03:29.465 SO libspdk_trace.so.11.0 00:03:29.465 SYMLINK libspdk_notify.so 00:03:29.727 SYMLINK libspdk_keyring.so 00:03:29.727 SYMLINK libspdk_trace.so 00:03:29.987 CC lib/sock/sock.o 00:03:29.987 CC lib/thread/thread.o 00:03:29.987 CC lib/thread/iobuf.o 00:03:29.987 CC lib/sock/sock_rpc.o 00:03:30.248 LIB libspdk_sock.a 00:03:30.248 SO libspdk_sock.so.10.0 00:03:30.507 SYMLINK libspdk_sock.so 00:03:30.767 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:30.767 CC lib/nvme/nvme_ctrlr.o 00:03:30.767 CC lib/nvme/nvme_fabric.o 00:03:30.767 CC lib/nvme/nvme_ns_cmd.o 00:03:30.767 CC lib/nvme/nvme_ns.o 00:03:30.767 CC lib/nvme/nvme_pcie_common.o 00:03:30.767 CC lib/nvme/nvme_pcie.o 00:03:30.767 CC lib/nvme/nvme_qpair.o 00:03:30.767 CC lib/nvme/nvme.o 00:03:30.767 CC lib/nvme/nvme_quirks.o 00:03:30.767 CC lib/nvme/nvme_transport.o 00:03:30.767 CC lib/nvme/nvme_discovery.o 00:03:30.767 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:30.767 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:30.767 CC lib/nvme/nvme_tcp.o 00:03:30.767 CC lib/nvme/nvme_opal.o 00:03:30.767 CC lib/nvme/nvme_io_msg.o 00:03:30.767 CC lib/nvme/nvme_poll_group.o 00:03:30.767 CC lib/nvme/nvme_zns.o 00:03:30.767 CC lib/nvme/nvme_stubs.o 00:03:30.767 CC lib/nvme/nvme_auth.o 00:03:30.767 CC lib/nvme/nvme_cuse.o 00:03:30.767 CC lib/nvme/nvme_vfio_user.o 00:03:30.767 CC lib/nvme/nvme_rdma.o 00:03:30.767 LIB libspdk_thread.a 00:03:31.026 SO libspdk_thread.so.11.0 00:03:31.026 SYMLINK libspdk_thread.so 00:03:31.286 CC lib/fsdev/fsdev.o 00:03:31.286 CC lib/fsdev/fsdev_io.o 00:03:31.286 CC lib/fsdev/fsdev_rpc.o 00:03:31.286 CC lib/vfu_tgt/tgt_endpoint.o 00:03:31.286 CC lib/vfu_tgt/tgt_rpc.o 00:03:31.287 CC lib/virtio/virtio_vhost_user.o 00:03:31.287 CC lib/virtio/virtio.o 00:03:31.287 CC lib/virtio/virtio_vfio_user.o 00:03:31.287 CC lib/virtio/virtio_pci.o 00:03:31.287 CC lib/accel/accel_rpc.o 00:03:31.287 CC lib/accel/accel.o 00:03:31.287 CC lib/blob/blobstore.o 00:03:31.287 CC lib/accel/accel_sw.o 00:03:31.287 CC lib/blob/request.o 00:03:31.287 CC lib/blob/blob_bs_dev.o 00:03:31.287 CC lib/blob/zeroes.o 00:03:31.287 CC lib/init/json_config.o 00:03:31.287 CC lib/init/rpc.o 00:03:31.287 CC lib/init/subsystem.o 00:03:31.287 CC lib/init/subsystem_rpc.o 00:03:31.557 LIB libspdk_init.a 00:03:31.557 SO libspdk_init.so.6.0 00:03:31.982 LIB libspdk_vfu_tgt.a 00:03:31.982 LIB libspdk_virtio.a 00:03:31.982 SO libspdk_vfu_tgt.so.3.0 00:03:31.982 SO libspdk_virtio.so.7.0 00:03:31.982 SYMLINK libspdk_init.so 00:03:31.982 SYMLINK libspdk_vfu_tgt.so 00:03:31.982 SYMLINK libspdk_virtio.so 00:03:31.982 LIB libspdk_fsdev.a 
00:03:31.982 SO libspdk_fsdev.so.2.0 00:03:31.982 SYMLINK libspdk_fsdev.so 00:03:32.244 CC lib/event/app.o 00:03:32.244 CC lib/event/reactor.o 00:03:32.244 CC lib/event/log_rpc.o 00:03:32.244 CC lib/event/app_rpc.o 00:03:32.244 CC lib/event/scheduler_static.o 00:03:32.244 LIB libspdk_accel.a 00:03:32.504 SO libspdk_accel.so.16.0 00:03:32.505 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:32.505 SYMLINK libspdk_accel.so 00:03:32.505 LIB libspdk_event.a 00:03:32.505 SO libspdk_event.so.14.0 00:03:32.765 SYMLINK libspdk_event.so 00:03:32.765 LIB libspdk_nvme.a 00:03:32.765 CC lib/bdev/bdev.o 00:03:32.765 CC lib/bdev/bdev_rpc.o 00:03:32.765 CC lib/bdev/bdev_zone.o 00:03:32.765 CC lib/bdev/part.o 00:03:32.765 CC lib/bdev/scsi_nvme.o 00:03:33.026 SO libspdk_nvme.so.15.0 00:03:33.026 LIB libspdk_fuse_dispatcher.a 00:03:33.026 SO libspdk_fuse_dispatcher.so.1.0 00:03:33.026 SYMLINK libspdk_fuse_dispatcher.so 00:03:33.026 SYMLINK libspdk_nvme.so 00:03:33.968 LIB libspdk_blob.a 00:03:33.968 SO libspdk_blob.so.12.0 00:03:34.230 SYMLINK libspdk_blob.so 00:03:34.491 CC lib/lvol/lvol.o 00:03:34.491 CC lib/blobfs/blobfs.o 00:03:34.491 CC lib/blobfs/tree.o 00:03:35.437 LIB libspdk_bdev.a 00:03:35.437 SO libspdk_bdev.so.17.0 00:03:35.437 LIB libspdk_blobfs.a 00:03:35.437 SO libspdk_blobfs.so.11.0 00:03:35.437 SYMLINK libspdk_bdev.so 00:03:35.437 LIB libspdk_lvol.a 00:03:35.437 SYMLINK libspdk_blobfs.so 00:03:35.437 SO libspdk_lvol.so.11.0 00:03:35.437 SYMLINK libspdk_lvol.so 00:03:35.697 CC lib/nvmf/ctrlr.o 00:03:35.697 CC lib/nvmf/ctrlr_discovery.o 00:03:35.697 CC lib/nvmf/ctrlr_bdev.o 00:03:35.697 CC lib/nvmf/subsystem.o 00:03:35.697 CC lib/nvmf/nvmf.o 00:03:35.697 CC lib/nvmf/nvmf_rpc.o 00:03:35.697 CC lib/nvmf/transport.o 00:03:35.697 CC lib/nvmf/tcp.o 00:03:35.697 CC lib/nvmf/stubs.o 00:03:35.697 CC lib/nvmf/mdns_server.o 00:03:35.697 CC lib/nvmf/vfio_user.o 00:03:35.697 CC lib/ftl/ftl_core.o 00:03:35.697 CC lib/nvmf/rdma.o 00:03:35.697 CC lib/scsi/dev.o 00:03:35.697 CC lib/nvmf/auth.o 00:03:35.697 CC lib/ftl/ftl_init.o 00:03:35.697 CC lib/ublk/ublk.o 00:03:35.697 CC lib/scsi/lun.o 00:03:35.697 CC lib/ftl/ftl_layout.o 00:03:35.697 CC lib/ublk/ublk_rpc.o 00:03:35.697 CC lib/scsi/port.o 00:03:35.697 CC lib/ftl/ftl_debug.o 00:03:35.697 CC lib/nbd/nbd.o 00:03:35.697 CC lib/ftl/ftl_io.o 00:03:35.697 CC lib/scsi/scsi.o 00:03:35.697 CC lib/nbd/nbd_rpc.o 00:03:35.697 CC lib/ftl/ftl_sb.o 00:03:35.697 CC lib/scsi/scsi_bdev.o 00:03:35.697 CC lib/scsi/scsi_pr.o 00:03:35.697 CC lib/ftl/ftl_l2p.o 00:03:35.697 CC lib/scsi/scsi_rpc.o 00:03:35.697 CC lib/ftl/ftl_l2p_flat.o 00:03:35.697 CC lib/scsi/task.o 00:03:35.697 CC lib/ftl/ftl_nv_cache.o 00:03:35.697 CC lib/ftl/ftl_band.o 00:03:35.697 CC lib/ftl/ftl_band_ops.o 00:03:35.697 CC lib/ftl/ftl_writer.o 00:03:35.697 CC lib/ftl/ftl_rq.o 00:03:35.697 CC lib/ftl/ftl_l2p_cache.o 00:03:35.697 CC lib/ftl/ftl_reloc.o 00:03:35.697 CC lib/ftl/ftl_p2l.o 00:03:35.697 CC lib/ftl/ftl_p2l_log.o 00:03:35.697 CC lib/ftl/mngt/ftl_mngt.o 00:03:35.697 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:35.697 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:35.697 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:35.697 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:35.697 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:35.697 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:35.697 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:35.697 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:35.697 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:35.697 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:35.697 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:35.697 CC lib/ftl/mngt/ftl_mngt_upgrade.o 
00:03:35.697 CC lib/ftl/utils/ftl_conf.o 00:03:35.697 CC lib/ftl/utils/ftl_md.o 00:03:35.697 CC lib/ftl/utils/ftl_mempool.o 00:03:35.697 CC lib/ftl/utils/ftl_bitmap.o 00:03:35.697 CC lib/ftl/utils/ftl_property.o 00:03:35.697 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:35.697 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:35.697 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:35.697 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:35.697 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:35.697 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:35.697 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:35.697 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:35.697 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:35.697 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:35.697 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:35.697 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:35.697 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:35.697 CC lib/ftl/base/ftl_base_dev.o 00:03:35.697 CC lib/ftl/base/ftl_base_bdev.o 00:03:35.697 CC lib/ftl/ftl_trace.o 00:03:36.266 LIB libspdk_nbd.a 00:03:36.266 SO libspdk_nbd.so.7.0 00:03:36.266 LIB libspdk_scsi.a 00:03:36.266 SYMLINK libspdk_nbd.so 00:03:36.266 SO libspdk_scsi.so.9.0 00:03:36.527 LIB libspdk_ublk.a 00:03:36.527 SO libspdk_ublk.so.3.0 00:03:36.527 SYMLINK libspdk_scsi.so 00:03:36.527 SYMLINK libspdk_ublk.so 00:03:36.788 LIB libspdk_ftl.a 00:03:36.788 CC lib/vhost/vhost.o 00:03:36.788 CC lib/iscsi/conn.o 00:03:36.788 CC lib/vhost/vhost_rpc.o 00:03:36.788 CC lib/iscsi/init_grp.o 00:03:36.788 CC lib/vhost/vhost_scsi.o 00:03:36.788 CC lib/iscsi/iscsi.o 00:03:36.788 CC lib/vhost/vhost_blk.o 00:03:36.788 CC lib/iscsi/param.o 00:03:36.788 CC lib/iscsi/portal_grp.o 00:03:36.788 CC lib/vhost/rte_vhost_user.o 00:03:36.788 CC lib/iscsi/tgt_node.o 00:03:36.788 CC lib/iscsi/iscsi_subsystem.o 00:03:36.788 CC lib/iscsi/iscsi_rpc.o 00:03:36.788 CC lib/iscsi/task.o 00:03:36.788 SO libspdk_ftl.so.9.0 00:03:37.049 SYMLINK libspdk_ftl.so 00:03:37.621 LIB libspdk_nvmf.a 00:03:37.621 SO libspdk_nvmf.so.20.0 00:03:37.882 LIB libspdk_vhost.a 00:03:37.883 SO libspdk_vhost.so.8.0 00:03:37.883 SYMLINK libspdk_nvmf.so 00:03:37.883 SYMLINK libspdk_vhost.so 00:03:38.144 LIB libspdk_iscsi.a 00:03:38.144 SO libspdk_iscsi.so.8.0 00:03:38.144 SYMLINK libspdk_iscsi.so 00:03:38.716 CC module/env_dpdk/env_dpdk_rpc.o 00:03:38.716 CC module/vfu_device/vfu_virtio.o 00:03:38.716 CC module/vfu_device/vfu_virtio_blk.o 00:03:38.716 CC module/vfu_device/vfu_virtio_scsi.o 00:03:38.716 CC module/vfu_device/vfu_virtio_rpc.o 00:03:38.716 CC module/vfu_device/vfu_virtio_fs.o 00:03:38.976 LIB libspdk_env_dpdk_rpc.a 00:03:38.976 CC module/accel/error/accel_error.o 00:03:38.976 CC module/accel/ioat/accel_ioat.o 00:03:38.976 CC module/accel/ioat/accel_ioat_rpc.o 00:03:38.976 CC module/accel/error/accel_error_rpc.o 00:03:38.976 CC module/sock/posix/posix.o 00:03:38.976 CC module/accel/dsa/accel_dsa.o 00:03:38.976 CC module/fsdev/aio/fsdev_aio.o 00:03:38.976 CC module/accel/dsa/accel_dsa_rpc.o 00:03:38.976 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:38.976 SO libspdk_env_dpdk_rpc.so.6.0 00:03:38.976 CC module/fsdev/aio/linux_aio_mgr.o 00:03:38.976 CC module/accel/iaa/accel_iaa.o 00:03:38.976 CC module/blob/bdev/blob_bdev.o 00:03:38.976 CC module/keyring/file/keyring_rpc.o 00:03:38.976 CC module/accel/iaa/accel_iaa_rpc.o 00:03:38.976 CC module/scheduler/gscheduler/gscheduler.o 00:03:38.976 CC module/keyring/file/keyring.o 00:03:38.976 CC module/keyring/linux/keyring.o 00:03:38.976 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:38.976 CC 
module/scheduler/dynamic/scheduler_dynamic.o 00:03:38.976 CC module/keyring/linux/keyring_rpc.o 00:03:38.976 SYMLINK libspdk_env_dpdk_rpc.so 00:03:39.236 LIB libspdk_scheduler_gscheduler.a 00:03:39.236 LIB libspdk_keyring_file.a 00:03:39.236 LIB libspdk_keyring_linux.a 00:03:39.236 LIB libspdk_accel_ioat.a 00:03:39.236 LIB libspdk_scheduler_dpdk_governor.a 00:03:39.236 SO libspdk_keyring_file.so.2.0 00:03:39.236 SO libspdk_scheduler_gscheduler.so.4.0 00:03:39.236 SO libspdk_keyring_linux.so.1.0 00:03:39.236 LIB libspdk_accel_error.a 00:03:39.236 LIB libspdk_accel_iaa.a 00:03:39.236 SO libspdk_accel_ioat.so.6.0 00:03:39.236 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:39.236 LIB libspdk_scheduler_dynamic.a 00:03:39.236 SO libspdk_accel_error.so.2.0 00:03:39.236 SO libspdk_accel_iaa.so.3.0 00:03:39.236 SO libspdk_scheduler_dynamic.so.4.0 00:03:39.236 LIB libspdk_accel_dsa.a 00:03:39.236 SYMLINK libspdk_keyring_linux.so 00:03:39.236 SYMLINK libspdk_keyring_file.so 00:03:39.236 SYMLINK libspdk_scheduler_gscheduler.so 00:03:39.236 SYMLINK libspdk_accel_ioat.so 00:03:39.236 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:39.236 LIB libspdk_blob_bdev.a 00:03:39.236 SO libspdk_accel_dsa.so.5.0 00:03:39.497 SYMLINK libspdk_accel_error.so 00:03:39.497 SO libspdk_blob_bdev.so.12.0 00:03:39.497 SYMLINK libspdk_accel_iaa.so 00:03:39.497 SYMLINK libspdk_scheduler_dynamic.so 00:03:39.497 LIB libspdk_vfu_device.a 00:03:39.497 SYMLINK libspdk_accel_dsa.so 00:03:39.497 SYMLINK libspdk_blob_bdev.so 00:03:39.497 SO libspdk_vfu_device.so.3.0 00:03:39.497 SYMLINK libspdk_vfu_device.so 00:03:39.759 LIB libspdk_fsdev_aio.a 00:03:39.759 SO libspdk_fsdev_aio.so.1.0 00:03:39.759 LIB libspdk_sock_posix.a 00:03:39.759 SYMLINK libspdk_fsdev_aio.so 00:03:39.759 SO libspdk_sock_posix.so.6.0 00:03:39.759 SYMLINK libspdk_sock_posix.so 00:03:40.020 CC module/bdev/delay/vbdev_delay.o 00:03:40.020 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:40.020 CC module/bdev/nvme/bdev_nvme.o 00:03:40.020 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:40.020 CC module/bdev/nvme/nvme_rpc.o 00:03:40.020 CC module/bdev/null/bdev_null.o 00:03:40.020 CC module/bdev/nvme/bdev_mdns_client.o 00:03:40.020 CC module/bdev/gpt/gpt.o 00:03:40.020 CC module/bdev/nvme/vbdev_opal.o 00:03:40.020 CC module/bdev/null/bdev_null_rpc.o 00:03:40.020 CC module/bdev/gpt/vbdev_gpt.o 00:03:40.020 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:40.020 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:40.020 CC module/bdev/error/vbdev_error.o 00:03:40.020 CC module/bdev/error/vbdev_error_rpc.o 00:03:40.020 CC module/blobfs/bdev/blobfs_bdev.o 00:03:40.020 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:40.020 CC module/bdev/passthru/vbdev_passthru.o 00:03:40.020 CC module/bdev/raid/bdev_raid.o 00:03:40.020 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:40.020 CC module/bdev/malloc/bdev_malloc.o 00:03:40.020 CC module/bdev/split/vbdev_split.o 00:03:40.020 CC module/bdev/raid/bdev_raid_rpc.o 00:03:40.020 CC module/bdev/split/vbdev_split_rpc.o 00:03:40.020 CC module/bdev/raid/bdev_raid_sb.o 00:03:40.020 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:40.020 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:40.020 CC module/bdev/raid/raid0.o 00:03:40.020 CC module/bdev/lvol/vbdev_lvol.o 00:03:40.020 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:40.021 CC module/bdev/raid/raid1.o 00:03:40.021 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:40.021 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:40.021 CC module/bdev/raid/concat.o 00:03:40.021 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:40.021 CC 
module/bdev/ftl/bdev_ftl.o 00:03:40.021 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:40.021 CC module/bdev/aio/bdev_aio.o 00:03:40.021 CC module/bdev/aio/bdev_aio_rpc.o 00:03:40.021 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:40.021 CC module/bdev/iscsi/bdev_iscsi.o 00:03:40.021 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:40.281 LIB libspdk_blobfs_bdev.a 00:03:40.281 SO libspdk_blobfs_bdev.so.6.0 00:03:40.281 LIB libspdk_bdev_split.a 00:03:40.281 LIB libspdk_bdev_null.a 00:03:40.281 LIB libspdk_bdev_gpt.a 00:03:40.281 SO libspdk_bdev_split.so.6.0 00:03:40.281 LIB libspdk_bdev_error.a 00:03:40.543 SYMLINK libspdk_blobfs_bdev.so 00:03:40.543 SO libspdk_bdev_gpt.so.6.0 00:03:40.543 LIB libspdk_bdev_passthru.a 00:03:40.543 SO libspdk_bdev_null.so.6.0 00:03:40.543 SO libspdk_bdev_error.so.6.0 00:03:40.543 LIB libspdk_bdev_ftl.a 00:03:40.543 LIB libspdk_bdev_zone_block.a 00:03:40.543 SYMLINK libspdk_bdev_split.so 00:03:40.543 SO libspdk_bdev_passthru.so.6.0 00:03:40.543 LIB libspdk_bdev_delay.a 00:03:40.543 SYMLINK libspdk_bdev_null.so 00:03:40.543 SO libspdk_bdev_zone_block.so.6.0 00:03:40.543 SO libspdk_bdev_ftl.so.6.0 00:03:40.543 LIB libspdk_bdev_iscsi.a 00:03:40.543 SYMLINK libspdk_bdev_gpt.so 00:03:40.543 LIB libspdk_bdev_aio.a 00:03:40.543 SO libspdk_bdev_delay.so.6.0 00:03:40.543 SYMLINK libspdk_bdev_error.so 00:03:40.543 LIB libspdk_bdev_malloc.a 00:03:40.543 SO libspdk_bdev_iscsi.so.6.0 00:03:40.543 SO libspdk_bdev_aio.so.6.0 00:03:40.543 SYMLINK libspdk_bdev_passthru.so 00:03:40.543 SO libspdk_bdev_malloc.so.6.0 00:03:40.543 SYMLINK libspdk_bdev_ftl.so 00:03:40.543 SYMLINK libspdk_bdev_zone_block.so 00:03:40.543 SYMLINK libspdk_bdev_delay.so 00:03:40.543 SYMLINK libspdk_bdev_iscsi.so 00:03:40.543 SYMLINK libspdk_bdev_aio.so 00:03:40.543 LIB libspdk_bdev_lvol.a 00:03:40.543 SYMLINK libspdk_bdev_malloc.so 00:03:40.543 LIB libspdk_bdev_virtio.a 00:03:40.804 SO libspdk_bdev_lvol.so.6.0 00:03:40.804 SO libspdk_bdev_virtio.so.6.0 00:03:40.804 SYMLINK libspdk_bdev_lvol.so 00:03:40.804 SYMLINK libspdk_bdev_virtio.so 00:03:41.066 LIB libspdk_bdev_raid.a 00:03:41.066 SO libspdk_bdev_raid.so.6.0 00:03:41.066 SYMLINK libspdk_bdev_raid.so 00:03:42.454 LIB libspdk_bdev_nvme.a 00:03:42.454 SO libspdk_bdev_nvme.so.7.1 00:03:42.454 SYMLINK libspdk_bdev_nvme.so 00:03:43.399 CC module/event/subsystems/iobuf/iobuf.o 00:03:43.399 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:43.399 CC module/event/subsystems/vmd/vmd.o 00:03:43.399 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:43.399 CC module/event/subsystems/keyring/keyring.o 00:03:43.399 CC module/event/subsystems/sock/sock.o 00:03:43.399 CC module/event/subsystems/scheduler/scheduler.o 00:03:43.399 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:43.399 CC module/event/subsystems/fsdev/fsdev.o 00:03:43.399 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:43.399 LIB libspdk_event_vhost_blk.a 00:03:43.399 LIB libspdk_event_vfu_tgt.a 00:03:43.399 LIB libspdk_event_keyring.a 00:03:43.399 LIB libspdk_event_vmd.a 00:03:43.399 LIB libspdk_event_iobuf.a 00:03:43.399 LIB libspdk_event_scheduler.a 00:03:43.399 LIB libspdk_event_fsdev.a 00:03:43.399 SO libspdk_event_vhost_blk.so.3.0 00:03:43.399 SO libspdk_event_vfu_tgt.so.3.0 00:03:43.399 SO libspdk_event_keyring.so.1.0 00:03:43.399 SO libspdk_event_vmd.so.6.0 00:03:43.660 SO libspdk_event_scheduler.so.4.0 00:03:43.660 SO libspdk_event_fsdev.so.1.0 00:03:43.660 SO libspdk_event_iobuf.so.3.0 00:03:43.660 LIB libspdk_event_sock.a 00:03:43.660 SYMLINK libspdk_event_vhost_blk.so 
00:03:43.660 SYMLINK libspdk_event_vfu_tgt.so 00:03:43.660 SYMLINK libspdk_event_keyring.so 00:03:43.660 SYMLINK libspdk_event_vmd.so 00:03:43.660 SYMLINK libspdk_event_fsdev.so 00:03:43.660 SO libspdk_event_sock.so.5.0 00:03:43.660 SYMLINK libspdk_event_scheduler.so 00:03:43.660 SYMLINK libspdk_event_iobuf.so 00:03:43.661 SYMLINK libspdk_event_sock.so 00:03:43.922 CC module/event/subsystems/accel/accel.o 00:03:44.184 LIB libspdk_event_accel.a 00:03:44.184 SO libspdk_event_accel.so.6.0 00:03:44.184 SYMLINK libspdk_event_accel.so 00:03:44.445 CC module/event/subsystems/bdev/bdev.o 00:03:44.708 LIB libspdk_event_bdev.a 00:03:44.708 SO libspdk_event_bdev.so.6.0 00:03:44.983 SYMLINK libspdk_event_bdev.so 00:03:45.246 CC module/event/subsystems/scsi/scsi.o 00:03:45.246 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:45.246 CC module/event/subsystems/nbd/nbd.o 00:03:45.246 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:45.246 CC module/event/subsystems/ublk/ublk.o 00:03:45.246 LIB libspdk_event_nbd.a 00:03:45.508 LIB libspdk_event_ublk.a 00:03:45.508 LIB libspdk_event_scsi.a 00:03:45.508 SO libspdk_event_nbd.so.6.0 00:03:45.508 SO libspdk_event_ublk.so.3.0 00:03:45.508 SO libspdk_event_scsi.so.6.0 00:03:45.508 LIB libspdk_event_nvmf.a 00:03:45.508 SYMLINK libspdk_event_nbd.so 00:03:45.508 SYMLINK libspdk_event_ublk.so 00:03:45.508 SYMLINK libspdk_event_scsi.so 00:03:45.508 SO libspdk_event_nvmf.so.6.0 00:03:45.508 SYMLINK libspdk_event_nvmf.so 00:03:45.770 CC module/event/subsystems/iscsi/iscsi.o 00:03:45.770 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:46.031 LIB libspdk_event_vhost_scsi.a 00:03:46.031 LIB libspdk_event_iscsi.a 00:03:46.031 SO libspdk_event_vhost_scsi.so.3.0 00:03:46.031 SO libspdk_event_iscsi.so.6.0 00:03:46.031 SYMLINK libspdk_event_vhost_scsi.so 00:03:46.031 SYMLINK libspdk_event_iscsi.so 00:03:46.292 SO libspdk.so.6.0 00:03:46.292 SYMLINK libspdk.so 00:03:46.867 CXX app/trace/trace.o 00:03:46.867 TEST_HEADER include/spdk/accel.h 00:03:46.867 TEST_HEADER include/spdk/accel_module.h 00:03:46.867 TEST_HEADER include/spdk/assert.h 00:03:46.867 TEST_HEADER include/spdk/base64.h 00:03:46.867 TEST_HEADER include/spdk/barrier.h 00:03:46.867 CC app/spdk_nvme_discover/discovery_aer.o 00:03:46.867 TEST_HEADER include/spdk/bdev.h 00:03:46.867 TEST_HEADER include/spdk/bdev_module.h 00:03:46.867 TEST_HEADER include/spdk/bdev_zone.h 00:03:46.867 CC app/spdk_top/spdk_top.o 00:03:46.867 TEST_HEADER include/spdk/bit_array.h 00:03:46.867 CC app/spdk_nvme_perf/perf.o 00:03:46.867 CC app/spdk_nvme_identify/identify.o 00:03:46.867 TEST_HEADER include/spdk/bit_pool.h 00:03:46.867 TEST_HEADER include/spdk/blob_bdev.h 00:03:46.867 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:46.867 TEST_HEADER include/spdk/blobfs.h 00:03:46.867 CC app/spdk_lspci/spdk_lspci.o 00:03:46.867 TEST_HEADER include/spdk/blob.h 00:03:46.867 CC test/rpc_client/rpc_client_test.o 00:03:46.867 TEST_HEADER include/spdk/conf.h 00:03:46.867 CC app/trace_record/trace_record.o 00:03:46.867 TEST_HEADER include/spdk/config.h 00:03:46.867 TEST_HEADER include/spdk/cpuset.h 00:03:46.867 TEST_HEADER include/spdk/crc16.h 00:03:46.867 TEST_HEADER include/spdk/crc32.h 00:03:46.867 TEST_HEADER include/spdk/crc64.h 00:03:46.867 TEST_HEADER include/spdk/dif.h 00:03:46.867 TEST_HEADER include/spdk/dma.h 00:03:46.867 TEST_HEADER include/spdk/endian.h 00:03:46.867 TEST_HEADER include/spdk/env_dpdk.h 00:03:46.867 TEST_HEADER include/spdk/env.h 00:03:46.867 TEST_HEADER include/spdk/event.h 00:03:46.867 TEST_HEADER 
include/spdk/fd_group.h 00:03:46.867 TEST_HEADER include/spdk/fd.h 00:03:46.867 TEST_HEADER include/spdk/file.h 00:03:46.867 TEST_HEADER include/spdk/fsdev.h 00:03:46.867 TEST_HEADER include/spdk/fsdev_module.h 00:03:46.867 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:46.867 TEST_HEADER include/spdk/ftl.h 00:03:46.867 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:46.867 TEST_HEADER include/spdk/gpt_spec.h 00:03:46.867 TEST_HEADER include/spdk/hexlify.h 00:03:46.867 TEST_HEADER include/spdk/histogram_data.h 00:03:46.867 TEST_HEADER include/spdk/idxd.h 00:03:46.867 TEST_HEADER include/spdk/idxd_spec.h 00:03:46.867 TEST_HEADER include/spdk/init.h 00:03:46.867 TEST_HEADER include/spdk/ioat.h 00:03:46.867 TEST_HEADER include/spdk/ioat_spec.h 00:03:46.867 TEST_HEADER include/spdk/iscsi_spec.h 00:03:46.867 TEST_HEADER include/spdk/json.h 00:03:46.867 TEST_HEADER include/spdk/jsonrpc.h 00:03:46.867 CC app/nvmf_tgt/nvmf_main.o 00:03:46.867 CC app/iscsi_tgt/iscsi_tgt.o 00:03:46.867 TEST_HEADER include/spdk/keyring_module.h 00:03:46.867 TEST_HEADER include/spdk/keyring.h 00:03:46.867 CC app/spdk_dd/spdk_dd.o 00:03:46.867 TEST_HEADER include/spdk/likely.h 00:03:46.867 TEST_HEADER include/spdk/log.h 00:03:46.867 TEST_HEADER include/spdk/lvol.h 00:03:46.867 TEST_HEADER include/spdk/memory.h 00:03:46.867 TEST_HEADER include/spdk/mmio.h 00:03:46.867 TEST_HEADER include/spdk/md5.h 00:03:46.867 TEST_HEADER include/spdk/nbd.h 00:03:46.867 TEST_HEADER include/spdk/notify.h 00:03:46.867 TEST_HEADER include/spdk/net.h 00:03:46.867 TEST_HEADER include/spdk/nvme.h 00:03:46.867 TEST_HEADER include/spdk/nvme_intel.h 00:03:46.868 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:46.868 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:46.868 TEST_HEADER include/spdk/nvme_spec.h 00:03:46.868 TEST_HEADER include/spdk/nvme_zns.h 00:03:46.868 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:46.868 TEST_HEADER include/spdk/nvmf.h 00:03:46.868 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:46.868 TEST_HEADER include/spdk/nvmf_spec.h 00:03:46.868 TEST_HEADER include/spdk/nvmf_transport.h 00:03:46.868 TEST_HEADER include/spdk/opal_spec.h 00:03:46.868 CC app/spdk_tgt/spdk_tgt.o 00:03:46.868 TEST_HEADER include/spdk/opal.h 00:03:46.868 TEST_HEADER include/spdk/pci_ids.h 00:03:46.868 TEST_HEADER include/spdk/pipe.h 00:03:46.868 TEST_HEADER include/spdk/queue.h 00:03:46.868 TEST_HEADER include/spdk/rpc.h 00:03:46.868 TEST_HEADER include/spdk/reduce.h 00:03:46.868 TEST_HEADER include/spdk/scheduler.h 00:03:46.868 TEST_HEADER include/spdk/scsi.h 00:03:46.868 TEST_HEADER include/spdk/scsi_spec.h 00:03:46.868 TEST_HEADER include/spdk/sock.h 00:03:46.868 TEST_HEADER include/spdk/stdinc.h 00:03:46.868 TEST_HEADER include/spdk/thread.h 00:03:46.868 TEST_HEADER include/spdk/string.h 00:03:46.868 TEST_HEADER include/spdk/trace_parser.h 00:03:46.868 TEST_HEADER include/spdk/trace.h 00:03:46.868 TEST_HEADER include/spdk/ublk.h 00:03:46.868 TEST_HEADER include/spdk/tree.h 00:03:46.868 TEST_HEADER include/spdk/util.h 00:03:46.868 TEST_HEADER include/spdk/version.h 00:03:46.868 TEST_HEADER include/spdk/uuid.h 00:03:46.868 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:46.868 TEST_HEADER include/spdk/vhost.h 00:03:46.868 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:46.868 TEST_HEADER include/spdk/xor.h 00:03:46.868 TEST_HEADER include/spdk/vmd.h 00:03:46.868 TEST_HEADER include/spdk/zipf.h 00:03:46.868 CXX test/cpp_headers/accel_module.o 00:03:46.868 CXX test/cpp_headers/accel.o 00:03:46.868 CXX test/cpp_headers/assert.o 00:03:46.868 CXX 
test/cpp_headers/barrier.o 00:03:46.868 CXX test/cpp_headers/base64.o 00:03:46.868 CXX test/cpp_headers/bdev.o 00:03:46.868 CXX test/cpp_headers/bdev_module.o 00:03:46.868 CXX test/cpp_headers/bdev_zone.o 00:03:46.868 CXX test/cpp_headers/bit_pool.o 00:03:46.868 CXX test/cpp_headers/bit_array.o 00:03:46.868 CXX test/cpp_headers/blob_bdev.o 00:03:46.868 CXX test/cpp_headers/blobfs_bdev.o 00:03:46.868 CXX test/cpp_headers/blobfs.o 00:03:46.868 CXX test/cpp_headers/blob.o 00:03:46.868 CXX test/cpp_headers/config.o 00:03:46.868 CXX test/cpp_headers/cpuset.o 00:03:46.868 CXX test/cpp_headers/conf.o 00:03:46.868 CXX test/cpp_headers/crc32.o 00:03:46.868 CXX test/cpp_headers/crc16.o 00:03:46.868 CXX test/cpp_headers/crc64.o 00:03:46.868 CXX test/cpp_headers/dif.o 00:03:46.868 CXX test/cpp_headers/endian.o 00:03:46.868 CXX test/cpp_headers/dma.o 00:03:46.868 CXX test/cpp_headers/env_dpdk.o 00:03:46.868 CXX test/cpp_headers/event.o 00:03:46.868 CXX test/cpp_headers/env.o 00:03:46.868 CXX test/cpp_headers/file.o 00:03:46.868 CXX test/cpp_headers/fd_group.o 00:03:46.868 CXX test/cpp_headers/fd.o 00:03:46.868 CXX test/cpp_headers/fsdev.o 00:03:46.868 CXX test/cpp_headers/ftl.o 00:03:46.868 CXX test/cpp_headers/fsdev_module.o 00:03:46.868 CXX test/cpp_headers/fuse_dispatcher.o 00:03:46.868 CXX test/cpp_headers/gpt_spec.o 00:03:46.868 CXX test/cpp_headers/histogram_data.o 00:03:46.868 CXX test/cpp_headers/hexlify.o 00:03:46.868 CXX test/cpp_headers/idxd_spec.o 00:03:46.868 CXX test/cpp_headers/init.o 00:03:46.868 CXX test/cpp_headers/idxd.o 00:03:46.868 CXX test/cpp_headers/ioat.o 00:03:46.868 CXX test/cpp_headers/ioat_spec.o 00:03:46.868 CXX test/cpp_headers/iscsi_spec.o 00:03:46.868 CXX test/cpp_headers/jsonrpc.o 00:03:46.868 CXX test/cpp_headers/keyring.o 00:03:46.868 CXX test/cpp_headers/json.o 00:03:46.868 CXX test/cpp_headers/log.o 00:03:46.868 CXX test/cpp_headers/keyring_module.o 00:03:46.868 CXX test/cpp_headers/md5.o 00:03:46.868 CXX test/cpp_headers/memory.o 00:03:46.868 CXX test/cpp_headers/likely.o 00:03:46.868 CXX test/cpp_headers/lvol.o 00:03:46.868 CXX test/cpp_headers/nbd.o 00:03:46.868 CXX test/cpp_headers/mmio.o 00:03:46.868 CXX test/cpp_headers/notify.o 00:03:46.868 CXX test/cpp_headers/net.o 00:03:46.868 CXX test/cpp_headers/nvme.o 00:03:46.868 CXX test/cpp_headers/nvme_ocssd.o 00:03:46.868 CXX test/cpp_headers/nvme_intel.o 00:03:46.868 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:46.868 CXX test/cpp_headers/nvme_spec.o 00:03:46.868 CXX test/cpp_headers/nvmf_cmd.o 00:03:46.868 CXX test/cpp_headers/nvme_zns.o 00:03:46.868 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:46.868 CXX test/cpp_headers/nvmf.o 00:03:47.135 CXX test/cpp_headers/opal.o 00:03:47.135 CXX test/cpp_headers/nvmf_transport.o 00:03:47.135 CXX test/cpp_headers/opal_spec.o 00:03:47.135 CXX test/cpp_headers/nvmf_spec.o 00:03:47.135 CXX test/cpp_headers/pipe.o 00:03:47.135 CXX test/cpp_headers/pci_ids.o 00:03:47.135 CXX test/cpp_headers/queue.o 00:03:47.135 CXX test/cpp_headers/reduce.o 00:03:47.135 CXX test/cpp_headers/rpc.o 00:03:47.135 CXX test/cpp_headers/scheduler.o 00:03:47.135 CXX test/cpp_headers/scsi.o 00:03:47.135 CXX test/cpp_headers/scsi_spec.o 00:03:47.135 CXX test/cpp_headers/stdinc.o 00:03:47.135 CXX test/cpp_headers/sock.o 00:03:47.135 CXX test/cpp_headers/string.o 00:03:47.135 CXX test/cpp_headers/thread.o 00:03:47.135 CXX test/cpp_headers/trace.o 00:03:47.135 CXX test/cpp_headers/trace_parser.o 00:03:47.135 CXX test/cpp_headers/ublk.o 00:03:47.135 CXX test/cpp_headers/tree.o 00:03:47.135 CXX 
test/cpp_headers/util.o 00:03:47.135 CXX test/cpp_headers/uuid.o 00:03:47.135 CC test/thread/poller_perf/poller_perf.o 00:03:47.135 CXX test/cpp_headers/vfio_user_spec.o 00:03:47.135 CXX test/cpp_headers/version.o 00:03:47.135 CXX test/cpp_headers/vfio_user_pci.o 00:03:47.135 CC examples/ioat/perf/perf.o 00:03:47.135 CXX test/cpp_headers/xor.o 00:03:47.135 CXX test/cpp_headers/vhost.o 00:03:47.135 CXX test/cpp_headers/vmd.o 00:03:47.135 CXX test/cpp_headers/zipf.o 00:03:47.135 CC examples/util/zipf/zipf.o 00:03:47.135 CC examples/ioat/verify/verify.o 00:03:47.135 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:47.135 CC test/env/vtophys/vtophys.o 00:03:47.135 CC test/env/pci/pci_ut.o 00:03:47.135 LINK spdk_lspci 00:03:47.135 CC test/env/memory/memory_ut.o 00:03:47.135 CC test/app/jsoncat/jsoncat.o 00:03:47.135 CC app/fio/nvme/fio_plugin.o 00:03:47.135 CC test/app/histogram_perf/histogram_perf.o 00:03:47.135 CC test/app/stub/stub.o 00:03:47.135 CC test/dma/test_dma/test_dma.o 00:03:47.135 CC app/fio/bdev/fio_plugin.o 00:03:47.406 LINK rpc_client_test 00:03:47.406 CC test/app/bdev_svc/bdev_svc.o 00:03:47.406 LINK interrupt_tgt 00:03:47.406 LINK spdk_nvme_discover 00:03:47.406 LINK nvmf_tgt 00:03:47.669 LINK spdk_tgt 00:03:47.669 CC test/env/mem_callbacks/mem_callbacks.o 00:03:47.669 LINK spdk_trace_record 00:03:47.669 LINK iscsi_tgt 00:03:47.669 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:47.669 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:47.669 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:47.669 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:47.670 LINK stub 00:03:47.930 LINK poller_perf 00:03:47.930 LINK verify 00:03:47.930 LINK vtophys 00:03:47.930 LINK zipf 00:03:48.191 LINK env_dpdk_post_init 00:03:48.191 LINK histogram_perf 00:03:48.191 LINK ioat_perf 00:03:48.191 LINK jsoncat 00:03:48.191 LINK spdk_trace 00:03:48.191 LINK spdk_dd 00:03:48.191 LINK bdev_svc 00:03:48.451 LINK spdk_bdev 00:03:48.451 LINK pci_ut 00:03:48.451 LINK vhost_fuzz 00:03:48.451 LINK nvme_fuzz 00:03:48.451 CC test/event/event_perf/event_perf.o 00:03:48.451 CC test/event/reactor_perf/reactor_perf.o 00:03:48.451 CC test/event/reactor/reactor.o 00:03:48.451 LINK spdk_nvme 00:03:48.451 CC test/event/scheduler/scheduler.o 00:03:48.451 LINK spdk_nvme_perf 00:03:48.451 CC test/event/app_repeat/app_repeat.o 00:03:48.451 LINK test_dma 00:03:48.451 LINK spdk_top 00:03:48.451 LINK mem_callbacks 00:03:48.451 CC app/vhost/vhost.o 00:03:48.713 LINK spdk_nvme_identify 00:03:48.713 CC examples/vmd/led/led.o 00:03:48.713 CC examples/vmd/lsvmd/lsvmd.o 00:03:48.713 CC examples/idxd/perf/perf.o 00:03:48.713 CC examples/sock/hello_world/hello_sock.o 00:03:48.713 LINK reactor_perf 00:03:48.713 CC examples/thread/thread/thread_ex.o 00:03:48.713 LINK event_perf 00:03:48.713 LINK reactor 00:03:48.713 LINK app_repeat 00:03:48.713 LINK scheduler 00:03:48.713 LINK lsvmd 00:03:48.713 LINK vhost 00:03:48.974 LINK led 00:03:48.974 LINK hello_sock 00:03:48.974 LINK thread 00:03:48.974 LINK idxd_perf 00:03:49.235 CC test/nvme/sgl/sgl.o 00:03:49.235 CC test/nvme/e2edp/nvme_dp.o 00:03:49.235 CC test/nvme/aer/aer.o 00:03:49.235 LINK memory_ut 00:03:49.235 CC test/nvme/simple_copy/simple_copy.o 00:03:49.235 CC test/nvme/startup/startup.o 00:03:49.235 CC test/nvme/reserve/reserve.o 00:03:49.235 CC test/nvme/compliance/nvme_compliance.o 00:03:49.235 CC test/nvme/connect_stress/connect_stress.o 00:03:49.235 CC test/nvme/overhead/overhead.o 00:03:49.235 CC test/nvme/cuse/cuse.o 00:03:49.235 CC test/nvme/err_injection/err_injection.o 
00:03:49.235 CC test/nvme/reset/reset.o 00:03:49.235 CC test/nvme/fused_ordering/fused_ordering.o 00:03:49.235 CC test/nvme/boot_partition/boot_partition.o 00:03:49.235 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:49.235 CC test/nvme/fdp/fdp.o 00:03:49.235 CC test/accel/dif/dif.o 00:03:49.235 CC test/blobfs/mkfs/mkfs.o 00:03:49.235 CC test/lvol/esnap/esnap.o 00:03:49.496 LINK err_injection 00:03:49.496 LINK connect_stress 00:03:49.496 LINK startup 00:03:49.496 LINK boot_partition 00:03:49.496 LINK reserve 00:03:49.496 LINK simple_copy 00:03:49.496 LINK fused_ordering 00:03:49.496 LINK doorbell_aers 00:03:49.496 LINK sgl 00:03:49.496 LINK nvme_dp 00:03:49.496 LINK reset 00:03:49.496 CC examples/nvme/hello_world/hello_world.o 00:03:49.496 LINK mkfs 00:03:49.496 CC examples/nvme/arbitration/arbitration.o 00:03:49.496 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:49.496 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:49.496 LINK aer 00:03:49.496 CC examples/nvme/abort/abort.o 00:03:49.496 CC examples/nvme/reconnect/reconnect.o 00:03:49.496 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:49.496 CC examples/nvme/hotplug/hotplug.o 00:03:49.496 LINK overhead 00:03:49.496 LINK nvme_compliance 00:03:49.496 LINK iscsi_fuzz 00:03:49.496 LINK fdp 00:03:49.496 CC examples/accel/perf/accel_perf.o 00:03:49.757 CC examples/blob/cli/blobcli.o 00:03:49.757 CC examples/blob/hello_world/hello_blob.o 00:03:49.757 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:49.757 LINK cmb_copy 00:03:49.757 LINK pmr_persistence 00:03:49.757 LINK hello_world 00:03:49.757 LINK hotplug 00:03:49.757 LINK dif 00:03:49.757 LINK arbitration 00:03:49.757 LINK reconnect 00:03:50.018 LINK abort 00:03:50.018 LINK hello_blob 00:03:50.018 LINK nvme_manage 00:03:50.018 LINK hello_fsdev 00:03:50.018 LINK accel_perf 00:03:50.279 LINK blobcli 00:03:50.279 LINK cuse 00:03:50.541 CC test/bdev/bdevio/bdevio.o 00:03:50.802 CC examples/bdev/hello_world/hello_bdev.o 00:03:50.802 CC examples/bdev/bdevperf/bdevperf.o 00:03:50.802 LINK bdevio 00:03:51.063 LINK hello_bdev 00:03:51.636 LINK bdevperf 00:03:52.208 CC examples/nvmf/nvmf/nvmf.o 00:03:52.470 LINK nvmf 00:03:53.859 LINK esnap 00:03:54.122 00:03:54.122 real 0m55.682s 00:03:54.122 user 8m1.941s 00:03:54.122 sys 5m25.294s 00:03:54.122 08:02:51 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:54.122 08:02:51 make -- common/autotest_common.sh@10 -- $ set +x 00:03:54.122 ************************************ 00:03:54.122 END TEST make 00:03:54.122 ************************************ 00:03:54.122 08:02:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:54.122 08:02:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:54.384 08:02:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:54.384 08:02:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:54.384 08:02:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:54.384 08:02:51 -- pm/common@44 -- $ pid=1642947 00:03:54.384 08:02:51 -- pm/common@50 -- $ kill -TERM 1642947 00:03:54.384 08:02:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:54.384 08:02:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:54.384 08:02:51 -- pm/common@44 -- $ pid=1642948 00:03:54.384 08:02:51 -- pm/common@50 -- $ kill -TERM 1642948 00:03:54.384 08:02:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:54.384 
08:02:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:54.384 08:02:51 -- pm/common@44 -- $ pid=1642950 00:03:54.384 08:02:51 -- pm/common@50 -- $ kill -TERM 1642950 00:03:54.384 08:02:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:54.384 08:02:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:54.384 08:02:51 -- pm/common@44 -- $ pid=1642974 00:03:54.384 08:02:51 -- pm/common@50 -- $ sudo -E kill -TERM 1642974 00:03:54.384 08:02:51 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:54.384 08:02:51 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:54.384 08:02:51 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:54.384 08:02:51 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:54.384 08:02:51 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:54.384 08:02:51 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:54.384 08:02:51 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:54.384 08:02:51 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:54.384 08:02:51 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:54.384 08:02:51 -- scripts/common.sh@336 -- # IFS=.-: 00:03:54.384 08:02:51 -- scripts/common.sh@336 -- # read -ra ver1 00:03:54.384 08:02:51 -- scripts/common.sh@337 -- # IFS=.-: 00:03:54.384 08:02:51 -- scripts/common.sh@337 -- # read -ra ver2 00:03:54.384 08:02:51 -- scripts/common.sh@338 -- # local 'op=<' 00:03:54.384 08:02:51 -- scripts/common.sh@340 -- # ver1_l=2 00:03:54.384 08:02:51 -- scripts/common.sh@341 -- # ver2_l=1 00:03:54.384 08:02:51 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:54.384 08:02:51 -- scripts/common.sh@344 -- # case "$op" in 00:03:54.384 08:02:51 -- scripts/common.sh@345 -- # : 1 00:03:54.384 08:02:51 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:54.384 08:02:51 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:54.384 08:02:51 -- scripts/common.sh@365 -- # decimal 1 00:03:54.384 08:02:51 -- scripts/common.sh@353 -- # local d=1 00:03:54.384 08:02:51 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:54.384 08:02:51 -- scripts/common.sh@355 -- # echo 1 00:03:54.384 08:02:51 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:54.384 08:02:51 -- scripts/common.sh@366 -- # decimal 2 00:03:54.384 08:02:51 -- scripts/common.sh@353 -- # local d=2 00:03:54.384 08:02:51 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:54.384 08:02:51 -- scripts/common.sh@355 -- # echo 2 00:03:54.384 08:02:51 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:54.384 08:02:51 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:54.384 08:02:51 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:54.384 08:02:51 -- scripts/common.sh@368 -- # return 0 00:03:54.384 08:02:51 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:54.384 08:02:51 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:54.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.384 --rc genhtml_branch_coverage=1 00:03:54.384 --rc genhtml_function_coverage=1 00:03:54.384 --rc genhtml_legend=1 00:03:54.384 --rc geninfo_all_blocks=1 00:03:54.384 --rc geninfo_unexecuted_blocks=1 00:03:54.384 00:03:54.384 ' 00:03:54.384 08:02:51 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:54.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.384 --rc genhtml_branch_coverage=1 00:03:54.384 --rc genhtml_function_coverage=1 00:03:54.384 --rc genhtml_legend=1 00:03:54.384 --rc geninfo_all_blocks=1 00:03:54.384 --rc geninfo_unexecuted_blocks=1 00:03:54.384 00:03:54.384 ' 00:03:54.384 08:02:51 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:54.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.384 --rc genhtml_branch_coverage=1 00:03:54.384 --rc genhtml_function_coverage=1 00:03:54.384 --rc genhtml_legend=1 00:03:54.384 --rc geninfo_all_blocks=1 00:03:54.384 --rc geninfo_unexecuted_blocks=1 00:03:54.384 00:03:54.384 ' 00:03:54.384 08:02:51 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:54.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.384 --rc genhtml_branch_coverage=1 00:03:54.384 --rc genhtml_function_coverage=1 00:03:54.384 --rc genhtml_legend=1 00:03:54.384 --rc geninfo_all_blocks=1 00:03:54.384 --rc geninfo_unexecuted_blocks=1 00:03:54.384 00:03:54.384 ' 00:03:54.384 08:02:51 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:54.384 08:02:51 -- nvmf/common.sh@7 -- # uname -s 00:03:54.384 08:02:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:54.384 08:02:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:54.384 08:02:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:54.384 08:02:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:54.384 08:02:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:54.384 08:02:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:54.384 08:02:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:54.384 08:02:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:54.384 08:02:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:54.384 08:02:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:54.646 08:02:51 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:54.646 08:02:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:54.646 08:02:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:54.646 08:02:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:54.646 08:02:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:54.646 08:02:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:54.646 08:02:51 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:54.646 08:02:51 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:54.646 08:02:51 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:54.646 08:02:51 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:54.646 08:02:51 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:54.646 08:02:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.646 08:02:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.646 08:02:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.646 08:02:51 -- paths/export.sh@5 -- # export PATH 00:03:54.646 08:02:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:54.646 08:02:51 -- nvmf/common.sh@51 -- # : 0 00:03:54.646 08:02:51 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:54.646 08:02:51 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:54.646 08:02:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:54.646 08:02:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:54.646 08:02:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:54.646 08:02:51 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:54.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:54.646 08:02:51 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:54.646 08:02:51 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:54.646 08:02:51 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:54.646 08:02:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:54.646 08:02:51 -- spdk/autotest.sh@32 -- # uname -s 00:03:54.646 08:02:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:54.646 08:02:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:54.646 08:02:51 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
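[Editor's note] The trace above captures a real shell bug: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' and stderr reports "integer expression expected", because the variable under test expands to the empty string while test's -eq operator requires integers on both sides. The variable's name is not visible in the xtrace, so the sketch below uses a hypothetical $flag; supplying a default with ${flag:-0} is the usual guard. This is a hedged illustration, not the SPDK fix:

    # Guard an integer comparison against an unset/empty variable.
    # "$flag" is a stand-in for whatever common.sh line 33 actually tests.
    flag=""
    if [ "${flag:-0}" -eq 1 ]; then   # empty expands to 0, so -eq is safe
        echo "feature enabled"
    fi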
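[Editor's note] The paths/export.sh trace above also shows each step prepending /opt/protoc, /opt/go, and /opt/golangci onto a PATH that already contains them, so the exported PATH carries duplicate entries. A minimal dedup-on-prepend sketch of the apparent intent (an assumption, not the repository's code):

    # Prepend a directory to PATH only when it is not already listed.
    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;               # already present, keep PATH unchanged
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/go/1.21.1/bin
    export PATH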
00:03:54.646 08:02:51 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:54.646 08:02:51 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:54.646 08:02:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:54.646 08:02:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:54.646 08:02:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:54.646 08:02:51 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:54.646 08:02:51 -- spdk/autotest.sh@48 -- # udevadm_pid=1708529 00:03:54.646 08:02:51 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:54.646 08:02:51 -- pm/common@17 -- # local monitor 00:03:54.646 08:02:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:54.646 08:02:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:54.646 08:02:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:54.646 08:02:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:54.646 08:02:51 -- pm/common@21 -- # date +%s 00:03:54.646 08:02:51 -- pm/common@21 -- # date +%s 00:03:54.646 08:02:51 -- pm/common@25 -- # sleep 1 00:03:54.646 08:02:51 -- pm/common@21 -- # date +%s 00:03:54.646 08:02:51 -- pm/common@21 -- # date +%s 00:03:54.646 08:02:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732777371 00:03:54.646 08:02:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732777371 00:03:54.646 08:02:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732777371 00:03:54.646 08:02:51 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732777371 00:03:54.646 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732777371_collect-cpu-load.pm.log 00:03:54.646 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732777371_collect-vmstat.pm.log 00:03:54.646 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732777371_collect-cpu-temp.pm.log 00:03:54.646 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732777371_collect-bmc-pm.bmc.pm.log 00:03:55.591 08:02:52 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:55.591 08:02:52 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:55.591 08:02:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:55.591 08:02:52 -- common/autotest_common.sh@10 -- # set +x 00:03:55.591 08:02:52 -- spdk/autotest.sh@59 -- # create_test_list 00:03:55.591 08:02:52 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:55.591 08:02:52 -- common/autotest_common.sh@10 -- # set +x 00:03:55.591 08:02:52 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:55.591 08:02:52 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:55.591 08:02:52 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:55.591 08:02:52 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:55.591 08:02:52 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:55.591 08:02:52 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:55.591 08:02:52 -- common/autotest_common.sh@1457 -- # uname 00:03:55.591 08:02:52 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:55.591 08:02:52 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:55.591 08:02:52 -- common/autotest_common.sh@1477 -- # uname 00:03:55.591 08:02:52 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:55.591 08:02:52 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:55.591 08:02:52 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:55.591 lcov: LCOV version 1.15 00:03:55.591 08:02:52 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:22.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:22.189 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:26.487 08:03:23 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:26.487 08:03:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.487 08:03:23 -- common/autotest_common.sh@10 -- # set +x 00:04:26.487 08:03:23 -- spdk/autotest.sh@78 -- # rm -f 00:04:26.487 08:03:23 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:29.794 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:04:29.794 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:04:29.794 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:04:29.794 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:04:29.794 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:04:29.794 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:04:29.794 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:04:29.794 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:04:30.055 0000:65:00.0 (144d a80a): Already using the nvme driver 00:04:30.055 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:04:30.055 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:04:30.055 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:04:30.055 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:04:30.055 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:04:30.055 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:04:30.055 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:04:30.055 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:04:30.317 08:03:27 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:30.317 08:03:27 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:30.317 08:03:27 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:30.317 08:03:27 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:30.317 08:03:27 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:30.317 08:03:27 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:30.317 08:03:27 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:30.317 08:03:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:30.317 08:03:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:30.317 08:03:27 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:30.317 08:03:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:30.317 08:03:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:30.317 08:03:27 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:30.317 08:03:27 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:30.317 08:03:27 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:30.578 No valid GPT data, bailing 00:04:30.578 08:03:27 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:30.578 08:03:27 -- scripts/common.sh@394 -- # pt= 00:04:30.578 08:03:27 -- scripts/common.sh@395 -- # return 1 00:04:30.578 08:03:27 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:30.578 1+0 records in 00:04:30.578 1+0 records out 00:04:30.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00148364 s, 707 MB/s 00:04:30.578 08:03:27 -- spdk/autotest.sh@105 -- # sync 00:04:30.578 08:03:27 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:30.578 08:03:27 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:30.578 08:03:27 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:40.586 08:03:36 -- spdk/autotest.sh@111 -- # uname -s 00:04:40.586 08:03:36 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:40.586 08:03:36 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:40.586 08:03:36 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:42.499 Hugepages 00:04:42.499 node hugesize free / total 00:04:42.499 node0 1048576kB 0 / 0 00:04:42.499 node0 2048kB 0 / 0 00:04:42.499 node1 1048576kB 0 / 0 00:04:42.499 node1 2048kB 0 / 0 00:04:42.499 00:04:42.499 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:42.499 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:42.499 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:42.499 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:42.499 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:42.499 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:42.499 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:42.499 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:42.499 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:42.761 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:42.761 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:42.761 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:42.761 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:42.761 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:42.761 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:42.761 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:42.761 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:42.761 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:04:42.761 08:03:39 -- spdk/autotest.sh@117 -- # uname -s 00:04:42.761 08:03:39 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:42.761 08:03:39 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:42.761 08:03:39 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:46.971 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:46.971 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:46.971 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:46.971 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:46.971 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:46.971 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:46.971 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:46.971 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:46.971 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:46.971 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:46.971 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:46.971 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:46.971 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:46.971 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:46.971 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:46.971 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:48.357 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:48.618 08:03:45 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:49.560 08:03:46 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:49.560 08:03:46 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:49.560 08:03:46 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:49.560 08:03:46 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:49.560 08:03:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:49.560 08:03:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:49.560 08:03:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:49.560 08:03:46 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:49.560 08:03:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:49.560 08:03:46 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:49.560 08:03:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:49.560 08:03:46 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:52.863 Waiting for block devices as requested 00:04:53.124 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:53.124 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:53.124 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:53.384 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:53.384 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:53.384 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:53.645 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:53.645 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:53.645 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:53.906 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:53.906 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:54.167 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:54.167 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:54.167 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:54.428 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:54.428 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:54.428 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:04:55.000 08:03:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:55.000 08:03:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:55.000 08:03:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:55.000 08:03:51 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:04:55.000 08:03:52 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:55.000 08:03:52 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:55.000 08:03:52 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:55.000 08:03:52 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:55.000 08:03:52 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:55.000 08:03:52 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:55.000 08:03:52 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:55.000 08:03:52 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:55.000 08:03:52 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:55.000 08:03:52 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:04:55.000 08:03:52 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:55.000 08:03:52 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:55.000 08:03:52 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:55.000 08:03:52 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:55.000 08:03:52 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:55.000 08:03:52 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:55.000 08:03:52 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:55.000 08:03:52 -- common/autotest_common.sh@1543 -- # continue 00:04:55.000 08:03:52 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:55.000 08:03:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:55.000 08:03:52 -- common/autotest_common.sh@10 -- # set +x 00:04:55.000 08:03:52 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:55.000 08:03:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:55.000 08:03:52 -- common/autotest_common.sh@10 -- # set +x 00:04:55.000 08:03:52 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:58.303 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:58.303 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:58.303 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:58.303 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:58.563 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:58.563 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:58.563 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:58.563 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:58.563 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:58.563 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:58.563 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:58.563 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:58.563 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:58.563 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:58.563 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:58.563 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:58.563 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:59.137 08:03:56 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:04:59.137 08:03:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:59.137 08:03:56 -- common/autotest_common.sh@10 -- # set +x 00:04:59.137 08:03:56 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:59.137 08:03:56 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:59.137 08:03:56 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:59.137 08:03:56 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:59.137 08:03:56 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:59.137 08:03:56 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:59.137 08:03:56 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:59.137 08:03:56 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:59.137 08:03:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:59.137 08:03:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:59.137 08:03:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:59.137 08:03:56 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:59.137 08:03:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:59.137 08:03:56 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:59.137 08:03:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:59.137 08:03:56 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:59.137 08:03:56 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:59.137 08:03:56 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:04:59.137 08:03:56 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:59.137 08:03:56 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:59.137 08:03:56 -- common/autotest_common.sh@1572 -- # return 0 00:04:59.137 08:03:56 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:59.137 08:03:56 -- common/autotest_common.sh@1580 -- # return 0 00:04:59.137 08:03:56 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:59.137 08:03:56 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:59.137 08:03:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:59.137 08:03:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:59.137 08:03:56 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:59.137 08:03:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:59.137 08:03:56 -- common/autotest_common.sh@10 -- # set +x 00:04:59.137 08:03:56 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:59.137 08:03:56 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:59.137 08:03:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.137 08:03:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.137 08:03:56 -- common/autotest_common.sh@10 -- # set +x 00:04:59.137 ************************************ 00:04:59.137 START TEST env 00:04:59.137 ************************************ 00:04:59.137 08:03:56 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:59.398 * Looking for test storage... 
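[Editor's note] The opal_revert_cleanup block above builds its controller list by piping gen_nvme.sh through jq -r '.config[].params.traddr', then filters on the PCI device ID 0x0a54 read from sysfs; here the only controller (0000:65:00.0) reports 0xa80a, so the [[ ... == 0x0a54 ]] match fails and there is nothing to revert. A standalone sysfs-only sketch of the same filter, assuming the standard /sys/class/nvme layout rather than SPDK's helpers:

    # List NVMe controller BDFs whose PCI device ID matches $want.
    want=0x0a54
    for ctrl in /sys/class/nvme/nvme*; do
        [ -e "$ctrl/device" ] || continue
        bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:65:00.0
        [ "$(cat "/sys/bus/pci/devices/$bdf/device")" = "$want" ] && echo "$bdf"
    done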
00:04:59.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:59.398 08:03:56 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:59.398 08:03:56 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:59.398 08:03:56 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:59.398 08:03:56 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:59.398 08:03:56 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.398 08:03:56 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.398 08:03:56 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.398 08:03:56 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.398 08:03:56 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.398 08:03:56 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.398 08:03:56 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.398 08:03:56 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.398 08:03:56 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.398 08:03:56 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.398 08:03:56 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.398 08:03:56 env -- scripts/common.sh@344 -- # case "$op" in 00:04:59.398 08:03:56 env -- scripts/common.sh@345 -- # : 1 00:04:59.398 08:03:56 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.398 08:03:56 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.398 08:03:56 env -- scripts/common.sh@365 -- # decimal 1 00:04:59.398 08:03:56 env -- scripts/common.sh@353 -- # local d=1 00:04:59.398 08:03:56 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.398 08:03:56 env -- scripts/common.sh@355 -- # echo 1 00:04:59.398 08:03:56 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.398 08:03:56 env -- scripts/common.sh@366 -- # decimal 2 00:04:59.398 08:03:56 env -- scripts/common.sh@353 -- # local d=2 00:04:59.398 08:03:56 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.398 08:03:56 env -- scripts/common.sh@355 -- # echo 2 00:04:59.398 08:03:56 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.398 08:03:56 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.398 08:03:56 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.398 08:03:56 env -- scripts/common.sh@368 -- # return 0 00:04:59.398 08:03:56 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.398 08:03:56 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:59.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.399 --rc genhtml_branch_coverage=1 00:04:59.399 --rc genhtml_function_coverage=1 00:04:59.399 --rc genhtml_legend=1 00:04:59.399 --rc geninfo_all_blocks=1 00:04:59.399 --rc geninfo_unexecuted_blocks=1 00:04:59.399 00:04:59.399 ' 00:04:59.399 08:03:56 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:59.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.399 --rc genhtml_branch_coverage=1 00:04:59.399 --rc genhtml_function_coverage=1 00:04:59.399 --rc genhtml_legend=1 00:04:59.399 --rc geninfo_all_blocks=1 00:04:59.399 --rc geninfo_unexecuted_blocks=1 00:04:59.399 00:04:59.399 ' 00:04:59.399 08:03:56 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:59.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.399 --rc genhtml_branch_coverage=1 00:04:59.399 --rc genhtml_function_coverage=1 
00:04:59.399 --rc genhtml_legend=1 00:04:59.399 --rc geninfo_all_blocks=1 00:04:59.399 --rc geninfo_unexecuted_blocks=1 00:04:59.399 00:04:59.399 ' 00:04:59.399 08:03:56 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:59.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.399 --rc genhtml_branch_coverage=1 00:04:59.399 --rc genhtml_function_coverage=1 00:04:59.399 --rc genhtml_legend=1 00:04:59.399 --rc geninfo_all_blocks=1 00:04:59.399 --rc geninfo_unexecuted_blocks=1 00:04:59.399 00:04:59.399 ' 00:04:59.399 08:03:56 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:59.399 08:03:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.399 08:03:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.399 08:03:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.399 ************************************ 00:04:59.399 START TEST env_memory 00:04:59.399 ************************************ 00:04:59.399 08:03:56 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:59.399 00:04:59.399 00:04:59.399 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.399 http://cunit.sourceforge.net/ 00:04:59.399 00:04:59.399 00:04:59.399 Suite: memory 00:04:59.399 Test: alloc and free memory map ...[2024-11-28 08:03:56.661920] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:59.399 passed 00:04:59.661 Test: mem map translation ...[2024-11-28 08:03:56.687593] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:59.661 [2024-11-28 08:03:56.687622] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:59.661 [2024-11-28 08:03:56.687672] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:59.661 [2024-11-28 08:03:56.687680] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:59.661 passed 00:04:59.661 Test: mem map registration ...[2024-11-28 08:03:56.742847] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:59.661 [2024-11-28 08:03:56.742869] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:59.661 passed 00:04:59.661 Test: mem map adjacent registrations ...passed 00:04:59.661 00:04:59.661 Run Summary: Type Total Ran Passed Failed Inactive 00:04:59.661 suites 1 1 n/a 0 0 00:04:59.661 tests 4 4 4 0 0 00:04:59.661 asserts 152 152 152 0 n/a 00:04:59.661 00:04:59.661 Elapsed time = 0.192 seconds 00:04:59.661 00:04:59.661 real 0m0.207s 00:04:59.661 user 0m0.197s 00:04:59.661 sys 0m0.010s 00:04:59.661 08:03:56 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.661 08:03:56 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
00:04:59.661 ************************************ 00:04:59.661 END TEST env_memory 00:04:59.661 ************************************ 00:04:59.661 08:03:56 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:59.661 08:03:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.661 08:03:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.661 08:03:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.661 ************************************ 00:04:59.661 START TEST env_vtophys 00:04:59.661 ************************************ 00:04:59.661 08:03:56 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:59.661 EAL: lib.eal log level changed from notice to debug 00:04:59.661 EAL: Detected lcore 0 as core 0 on socket 0 00:04:59.661 EAL: Detected lcore 1 as core 1 on socket 0 00:04:59.661 EAL: Detected lcore 2 as core 2 on socket 0 00:04:59.661 EAL: Detected lcore 3 as core 3 on socket 0 00:04:59.661 EAL: Detected lcore 4 as core 4 on socket 0 00:04:59.661 EAL: Detected lcore 5 as core 5 on socket 0 00:04:59.661 EAL: Detected lcore 6 as core 6 on socket 0 00:04:59.661 EAL: Detected lcore 7 as core 7 on socket 0 00:04:59.661 EAL: Detected lcore 8 as core 8 on socket 0 00:04:59.661 EAL: Detected lcore 9 as core 9 on socket 0 00:04:59.661 EAL: Detected lcore 10 as core 10 on socket 0 00:04:59.661 EAL: Detected lcore 11 as core 11 on socket 0 00:04:59.661 EAL: Detected lcore 12 as core 12 on socket 0 00:04:59.661 EAL: Detected lcore 13 as core 13 on socket 0 00:04:59.661 EAL: Detected lcore 14 as core 14 on socket 0 00:04:59.661 EAL: Detected lcore 15 as core 15 on socket 0 00:04:59.661 EAL: Detected lcore 16 as core 16 on socket 0 00:04:59.661 EAL: Detected lcore 17 as core 17 on socket 0 00:04:59.661 EAL: Detected lcore 18 as core 18 on socket 0 00:04:59.661 EAL: Detected lcore 19 as core 19 on socket 0 00:04:59.661 EAL: Detected lcore 20 as core 20 on socket 0 00:04:59.661 EAL: Detected lcore 21 as core 21 on socket 0 00:04:59.661 EAL: Detected lcore 22 as core 22 on socket 0 00:04:59.661 EAL: Detected lcore 23 as core 23 on socket 0 00:04:59.661 EAL: Detected lcore 24 as core 24 on socket 0 00:04:59.661 EAL: Detected lcore 25 as core 25 on socket 0 00:04:59.661 EAL: Detected lcore 26 as core 26 on socket 0 00:04:59.661 EAL: Detected lcore 27 as core 27 on socket 0 00:04:59.661 EAL: Detected lcore 28 as core 28 on socket 0 00:04:59.661 EAL: Detected lcore 29 as core 29 on socket 0 00:04:59.661 EAL: Detected lcore 30 as core 30 on socket 0 00:04:59.661 EAL: Detected lcore 31 as core 31 on socket 0 00:04:59.661 EAL: Detected lcore 32 as core 32 on socket 0 00:04:59.661 EAL: Detected lcore 33 as core 33 on socket 0 00:04:59.661 EAL: Detected lcore 34 as core 34 on socket 0 00:04:59.661 EAL: Detected lcore 35 as core 35 on socket 0 00:04:59.661 EAL: Detected lcore 36 as core 0 on socket 1 00:04:59.661 EAL: Detected lcore 37 as core 1 on socket 1 00:04:59.661 EAL: Detected lcore 38 as core 2 on socket 1 00:04:59.661 EAL: Detected lcore 39 as core 3 on socket 1 00:04:59.661 EAL: Detected lcore 40 as core 4 on socket 1 00:04:59.661 EAL: Detected lcore 41 as core 5 on socket 1 00:04:59.661 EAL: Detected lcore 42 as core 6 on socket 1 00:04:59.661 EAL: Detected lcore 43 as core 7 on socket 1 00:04:59.661 EAL: Detected lcore 44 as core 8 on socket 1 00:04:59.661 EAL: Detected lcore 45 as core 9 on socket 1 
00:04:59.661 EAL: Detected lcore 46 as core 10 on socket 1 00:04:59.661 EAL: Detected lcore 47 as core 11 on socket 1 00:04:59.661 EAL: Detected lcore 48 as core 12 on socket 1 00:04:59.661 EAL: Detected lcore 49 as core 13 on socket 1 00:04:59.661 EAL: Detected lcore 50 as core 14 on socket 1 00:04:59.661 EAL: Detected lcore 51 as core 15 on socket 1 00:04:59.661 EAL: Detected lcore 52 as core 16 on socket 1 00:04:59.661 EAL: Detected lcore 53 as core 17 on socket 1 00:04:59.661 EAL: Detected lcore 54 as core 18 on socket 1 00:04:59.661 EAL: Detected lcore 55 as core 19 on socket 1 00:04:59.661 EAL: Detected lcore 56 as core 20 on socket 1 00:04:59.661 EAL: Detected lcore 57 as core 21 on socket 1 00:04:59.661 EAL: Detected lcore 58 as core 22 on socket 1 00:04:59.661 EAL: Detected lcore 59 as core 23 on socket 1 00:04:59.661 EAL: Detected lcore 60 as core 24 on socket 1 00:04:59.661 EAL: Detected lcore 61 as core 25 on socket 1 00:04:59.661 EAL: Detected lcore 62 as core 26 on socket 1 00:04:59.661 EAL: Detected lcore 63 as core 27 on socket 1 00:04:59.661 EAL: Detected lcore 64 as core 28 on socket 1 00:04:59.661 EAL: Detected lcore 65 as core 29 on socket 1 00:04:59.661 EAL: Detected lcore 66 as core 30 on socket 1 00:04:59.661 EAL: Detected lcore 67 as core 31 on socket 1 00:04:59.661 EAL: Detected lcore 68 as core 32 on socket 1 00:04:59.661 EAL: Detected lcore 69 as core 33 on socket 1 00:04:59.661 EAL: Detected lcore 70 as core 34 on socket 1 00:04:59.661 EAL: Detected lcore 71 as core 35 on socket 1 00:04:59.661 EAL: Detected lcore 72 as core 0 on socket 0 00:04:59.661 EAL: Detected lcore 73 as core 1 on socket 0 00:04:59.661 EAL: Detected lcore 74 as core 2 on socket 0 00:04:59.661 EAL: Detected lcore 75 as core 3 on socket 0 00:04:59.661 EAL: Detected lcore 76 as core 4 on socket 0 00:04:59.661 EAL: Detected lcore 77 as core 5 on socket 0 00:04:59.661 EAL: Detected lcore 78 as core 6 on socket 0 00:04:59.661 EAL: Detected lcore 79 as core 7 on socket 0 00:04:59.661 EAL: Detected lcore 80 as core 8 on socket 0 00:04:59.661 EAL: Detected lcore 81 as core 9 on socket 0 00:04:59.661 EAL: Detected lcore 82 as core 10 on socket 0 00:04:59.661 EAL: Detected lcore 83 as core 11 on socket 0 00:04:59.661 EAL: Detected lcore 84 as core 12 on socket 0 00:04:59.661 EAL: Detected lcore 85 as core 13 on socket 0 00:04:59.661 EAL: Detected lcore 86 as core 14 on socket 0 00:04:59.661 EAL: Detected lcore 87 as core 15 on socket 0 00:04:59.661 EAL: Detected lcore 88 as core 16 on socket 0 00:04:59.661 EAL: Detected lcore 89 as core 17 on socket 0 00:04:59.661 EAL: Detected lcore 90 as core 18 on socket 0 00:04:59.661 EAL: Detected lcore 91 as core 19 on socket 0 00:04:59.661 EAL: Detected lcore 92 as core 20 on socket 0 00:04:59.661 EAL: Detected lcore 93 as core 21 on socket 0 00:04:59.661 EAL: Detected lcore 94 as core 22 on socket 0 00:04:59.661 EAL: Detected lcore 95 as core 23 on socket 0 00:04:59.661 EAL: Detected lcore 96 as core 24 on socket 0 00:04:59.661 EAL: Detected lcore 97 as core 25 on socket 0 00:04:59.661 EAL: Detected lcore 98 as core 26 on socket 0 00:04:59.661 EAL: Detected lcore 99 as core 27 on socket 0 00:04:59.661 EAL: Detected lcore 100 as core 28 on socket 0 00:04:59.662 EAL: Detected lcore 101 as core 29 on socket 0 00:04:59.662 EAL: Detected lcore 102 as core 30 on socket 0 00:04:59.662 EAL: Detected lcore 103 as core 31 on socket 0 00:04:59.662 EAL: Detected lcore 104 as core 32 on socket 0 00:04:59.662 EAL: Detected lcore 105 as core 33 on socket 0 00:04:59.662 EAL: 
Detected lcore 106 as core 34 on socket 0 00:04:59.662 EAL: Detected lcore 107 as core 35 on socket 0 00:04:59.662 EAL: Detected lcore 108 as core 0 on socket 1 00:04:59.662 EAL: Detected lcore 109 as core 1 on socket 1 00:04:59.662 EAL: Detected lcore 110 as core 2 on socket 1 00:04:59.662 EAL: Detected lcore 111 as core 3 on socket 1 00:04:59.662 EAL: Detected lcore 112 as core 4 on socket 1 00:04:59.662 EAL: Detected lcore 113 as core 5 on socket 1 00:04:59.662 EAL: Detected lcore 114 as core 6 on socket 1 00:04:59.662 EAL: Detected lcore 115 as core 7 on socket 1 00:04:59.662 EAL: Detected lcore 116 as core 8 on socket 1 00:04:59.662 EAL: Detected lcore 117 as core 9 on socket 1 00:04:59.662 EAL: Detected lcore 118 as core 10 on socket 1 00:04:59.662 EAL: Detected lcore 119 as core 11 on socket 1 00:04:59.662 EAL: Detected lcore 120 as core 12 on socket 1 00:04:59.662 EAL: Detected lcore 121 as core 13 on socket 1 00:04:59.662 EAL: Detected lcore 122 as core 14 on socket 1 00:04:59.662 EAL: Detected lcore 123 as core 15 on socket 1 00:04:59.662 EAL: Detected lcore 124 as core 16 on socket 1 00:04:59.662 EAL: Detected lcore 125 as core 17 on socket 1 00:04:59.662 EAL: Detected lcore 126 as core 18 on socket 1 00:04:59.662 EAL: Detected lcore 127 as core 19 on socket 1 00:04:59.662 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:59.662 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:59.662 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:59.662 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:59.662 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:59.662 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:59.662 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:59.662 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:59.662 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:59.662 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:59.662 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:59.662 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:59.662 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:59.662 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:59.662 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:59.662 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:59.662 EAL: Maximum logical cores by configuration: 128 00:04:59.662 EAL: Detected CPU lcores: 128 00:04:59.662 EAL: Detected NUMA nodes: 2 00:04:59.662 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:59.662 EAL: Detected shared linkage of DPDK 00:04:59.662 EAL: No shared files mode enabled, IPC will be disabled 00:04:59.923 EAL: Bus pci wants IOVA as 'DC' 00:04:59.923 EAL: Buses did not request a specific IOVA mode. 00:04:59.923 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:59.923 EAL: Selected IOVA mode 'VA' 00:04:59.923 EAL: Probing VFIO support... 00:04:59.923 EAL: IOMMU type 1 (Type 1) is supported 00:04:59.923 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:59.923 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:59.923 EAL: VFIO support initialized 00:04:59.923 EAL: Ask a virtual area of 0x2e000 bytes 00:04:59.923 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:59.923 EAL: Setting up physically contiguous memory... 
00:04:59.923 EAL: Setting maximum number of open files to 524288 00:04:59.923 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:59.923 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:59.923 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:59.923 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.923 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:59.923 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.923 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.923 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:59.923 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:59.923 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.923 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:59.923 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.923 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.923 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:59.923 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:59.923 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.923 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:59.923 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.923 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.923 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:59.923 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:59.923 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.923 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:59.923 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.923 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.923 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:59.923 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:59.923 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:59.923 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.923 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:59.923 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:59.923 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.923 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:59.923 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:59.923 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.923 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:59.923 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:59.923 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.923 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:59.923 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:59.923 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.923 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:59.923 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:59.923 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.923 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:59.923 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:59.923 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.923 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:59.923 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:59.923 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.923 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:59.923 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:59.923 EAL: Hugepages will be freed exactly as allocated. 00:04:59.923 EAL: No shared files mode enabled, IPC is disabled 00:04:59.924 EAL: No shared files mode enabled, IPC is disabled 00:04:59.924 EAL: TSC frequency is ~2400000 KHz 00:04:59.924 EAL: Main lcore 0 is ready (tid=7f6859de1a00;cpuset=[0]) 00:04:59.924 EAL: Trying to obtain current memory policy. 00:04:59.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.924 EAL: Restoring previous memory policy: 0 00:04:59.924 EAL: request: mp_malloc_sync 00:04:59.924 EAL: No shared files mode enabled, IPC is disabled 00:04:59.924 EAL: Heap on socket 0 was expanded by 2MB 00:04:59.924 EAL: No shared files mode enabled, IPC is disabled 00:04:59.924 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:59.924 EAL: Mem event callback 'spdk:(nil)' registered 00:04:59.924 00:04:59.924 00:04:59.924 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.924 http://cunit.sourceforge.net/ 00:04:59.924 00:04:59.924 00:04:59.924 Suite: components_suite 00:04:59.924 Test: vtophys_malloc_test ...passed 00:04:59.924 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:59.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.924 EAL: Restoring previous memory policy: 4 00:04:59.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.924 EAL: request: mp_malloc_sync 00:04:59.924 EAL: No shared files mode enabled, IPC is disabled 00:04:59.924 EAL: Heap on socket 0 was expanded by 4MB 00:04:59.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.924 EAL: request: mp_malloc_sync 00:04:59.924 EAL: No shared files mode enabled, IPC is disabled 00:04:59.924 EAL: Heap on socket 0 was shrunk by 4MB 00:04:59.924 EAL: Trying to obtain current memory policy. 00:04:59.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.924 EAL: Restoring previous memory policy: 4 00:04:59.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.924 EAL: request: mp_malloc_sync 00:04:59.924 EAL: No shared files mode enabled, IPC is disabled 00:04:59.924 EAL: Heap on socket 0 was expanded by 6MB 00:04:59.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.924 EAL: request: mp_malloc_sync 00:04:59.924 EAL: No shared files mode enabled, IPC is disabled 00:04:59.924 EAL: Heap on socket 0 was shrunk by 6MB 00:04:59.924 EAL: Trying to obtain current memory policy. 00:04:59.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.924 EAL: Restoring previous memory policy: 4 00:04:59.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.924 EAL: request: mp_malloc_sync 00:04:59.924 EAL: No shared files mode enabled, IPC is disabled 00:04:59.924 EAL: Heap on socket 0 was expanded by 10MB 00:04:59.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.924 EAL: request: mp_malloc_sync 00:04:59.924 EAL: No shared files mode enabled, IPC is disabled 00:04:59.924 EAL: Heap on socket 0 was shrunk by 10MB 00:04:59.924 EAL: Trying to obtain current memory policy. 
00:04:59.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.924 EAL: Restoring previous memory policy: 4 00:04:59.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.924 EAL: request: mp_malloc_sync 00:04:59.924 EAL: No shared files mode enabled, IPC is disabled 00:04:59.924 EAL: Heap on socket 0 was expanded by 18MB 00:04:59.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.924 EAL: request: mp_malloc_sync 00:04:59.924 EAL: No shared files mode enabled, IPC is disabled 00:04:59.924 EAL: Heap on socket 0 was shrunk by 18MB 00:04:59.924 EAL: Trying to obtain current memory policy. 00:04:59.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.924 EAL: Restoring previous memory policy: 4 00:04:59.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.924 EAL: request: mp_malloc_sync 00:04:59.924 EAL: No shared files mode enabled, IPC is disabled 00:04:59.924 EAL: Heap on socket 0 was expanded by 34MB 00:04:59.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.924 EAL: request: mp_malloc_sync 00:04:59.924 EAL: No shared files mode enabled, IPC is disabled 00:04:59.924 EAL: Heap on socket 0 was shrunk by 34MB 00:04:59.924 EAL: Trying to obtain current memory policy. 00:04:59.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.924 EAL: Restoring previous memory policy: 4 00:04:59.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.924 EAL: request: mp_malloc_sync 00:04:59.924 EAL: No shared files mode enabled, IPC is disabled 00:04:59.924 EAL: Heap on socket 0 was expanded by 66MB 00:04:59.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.924 EAL: request: mp_malloc_sync 00:04:59.924 EAL: No shared files mode enabled, IPC is disabled 00:04:59.924 EAL: Heap on socket 0 was shrunk by 66MB 00:04:59.924 EAL: Trying to obtain current memory policy. 00:04:59.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.924 EAL: Restoring previous memory policy: 4 00:04:59.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.924 EAL: request: mp_malloc_sync 00:04:59.924 EAL: No shared files mode enabled, IPC is disabled 00:04:59.924 EAL: Heap on socket 0 was expanded by 130MB 00:04:59.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.924 EAL: request: mp_malloc_sync 00:04:59.924 EAL: No shared files mode enabled, IPC is disabled 00:04:59.924 EAL: Heap on socket 0 was shrunk by 130MB 00:04:59.924 EAL: Trying to obtain current memory policy. 00:04:59.924 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.924 EAL: Restoring previous memory policy: 4 00:04:59.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.924 EAL: request: mp_malloc_sync 00:04:59.924 EAL: No shared files mode enabled, IPC is disabled 00:04:59.924 EAL: Heap on socket 0 was expanded by 258MB 00:04:59.924 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.184 EAL: request: mp_malloc_sync 00:05:00.184 EAL: No shared files mode enabled, IPC is disabled 00:05:00.184 EAL: Heap on socket 0 was shrunk by 258MB 00:05:00.184 EAL: Trying to obtain current memory policy. 
00:05:00.184 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.184 EAL: Restoring previous memory policy: 4 00:05:00.184 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.184 EAL: request: mp_malloc_sync 00:05:00.184 EAL: No shared files mode enabled, IPC is disabled 00:05:00.184 EAL: Heap on socket 0 was expanded by 514MB 00:05:00.184 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.184 EAL: request: mp_malloc_sync 00:05:00.184 EAL: No shared files mode enabled, IPC is disabled 00:05:00.184 EAL: Heap on socket 0 was shrunk by 514MB 00:05:00.184 EAL: Trying to obtain current memory policy. 00:05:00.184 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.444 EAL: Restoring previous memory policy: 4 00:05:00.444 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.444 EAL: request: mp_malloc_sync 00:05:00.444 EAL: No shared files mode enabled, IPC is disabled 00:05:00.444 EAL: Heap on socket 0 was expanded by 1026MB 00:05:00.444 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.706 EAL: request: mp_malloc_sync 00:05:00.706 EAL: No shared files mode enabled, IPC is disabled 00:05:00.706 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:00.706 passed 00:05:00.706 00:05:00.706 Run Summary: Type Total Ran Passed Failed Inactive 00:05:00.706 suites 1 1 n/a 0 0 00:05:00.706 tests 2 2 2 0 0 00:05:00.706 asserts 497 497 497 0 n/a 00:05:00.706 00:05:00.706 Elapsed time = 0.687 seconds 00:05:00.706 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.706 EAL: request: mp_malloc_sync 00:05:00.706 EAL: No shared files mode enabled, IPC is disabled 00:05:00.706 EAL: Heap on socket 0 was shrunk by 2MB 00:05:00.706 EAL: No shared files mode enabled, IPC is disabled 00:05:00.706 EAL: No shared files mode enabled, IPC is disabled 00:05:00.706 EAL: No shared files mode enabled, IPC is disabled 00:05:00.706 00:05:00.706 real 0m0.846s 00:05:00.706 user 0m0.435s 00:05:00.706 sys 0m0.385s 00:05:00.706 08:03:57 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.706 08:03:57 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:00.706 ************************************ 00:05:00.706 END TEST env_vtophys 00:05:00.706 ************************************ 00:05:00.706 08:03:57 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:00.706 08:03:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.706 08:03:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.706 08:03:57 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.706 ************************************ 00:05:00.706 START TEST env_pci 00:05:00.706 ************************************ 00:05:00.706 08:03:57 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:00.706 00:05:00.706 00:05:00.706 CUnit - A unit testing framework for C - Version 2.1-3 00:05:00.706 http://cunit.sourceforge.net/ 00:05:00.706 00:05:00.706 00:05:00.706 Suite: pci 00:05:00.706 Test: pci_hook ...[2024-11-28 08:03:57.845805] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1728514 has claimed it 00:05:00.706 EAL: Cannot find device (10000:00:01.0) 00:05:00.706 EAL: Failed to attach device on primary process 00:05:00.706 passed 00:05:00.706 00:05:00.706 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:00.706 suites 1 1 n/a 0 0 00:05:00.706 tests 1 1 1 0 0 00:05:00.706 asserts 25 25 25 0 n/a 00:05:00.706 00:05:00.706 Elapsed time = 0.031 seconds 00:05:00.706 00:05:00.706 real 0m0.053s 00:05:00.706 user 0m0.014s 00:05:00.706 sys 0m0.038s 00:05:00.706 08:03:57 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.706 08:03:57 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:00.706 ************************************ 00:05:00.706 END TEST env_pci 00:05:00.706 ************************************ 00:05:00.706 08:03:57 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:00.706 08:03:57 env -- env/env.sh@15 -- # uname 00:05:00.706 08:03:57 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:00.706 08:03:57 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:00.706 08:03:57 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:00.706 08:03:57 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:00.706 08:03:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.706 08:03:57 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.706 ************************************ 00:05:00.706 START TEST env_dpdk_post_init 00:05:00.706 ************************************ 00:05:00.706 08:03:57 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:00.967 EAL: Detected CPU lcores: 128 00:05:00.967 EAL: Detected NUMA nodes: 2 00:05:00.967 EAL: Detected shared linkage of DPDK 00:05:00.967 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:00.967 EAL: Selected IOVA mode 'VA' 00:05:00.967 EAL: VFIO support initialized 00:05:00.967 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:00.967 EAL: Using IOMMU type 1 (Type 1) 00:05:00.967 EAL: Ignore mapping IO port bar(1) 00:05:01.228 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:01.228 EAL: Ignore mapping IO port bar(1) 00:05:01.489 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:01.489 EAL: Ignore mapping IO port bar(1) 00:05:01.750 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:01.750 EAL: Ignore mapping IO port bar(1) 00:05:01.750 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:02.010 EAL: Ignore mapping IO port bar(1) 00:05:02.010 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:02.271 EAL: Ignore mapping IO port bar(1) 00:05:02.271 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:02.532 EAL: Ignore mapping IO port bar(1) 00:05:02.532 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:02.532 EAL: Ignore mapping IO port bar(1) 00:05:02.792 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:03.053 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:03.053 EAL: Ignore mapping IO port bar(1) 00:05:03.314 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:03.314 EAL: Ignore mapping IO port bar(1) 00:05:03.314 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:03.575 EAL: Ignore mapping IO port bar(1) 00:05:03.575 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:03.835 EAL: Ignore mapping IO port bar(1) 00:05:03.835 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:04.097 EAL: Ignore mapping IO port bar(1) 00:05:04.097 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:04.097 EAL: Ignore mapping IO port bar(1) 00:05:04.358 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:04.358 EAL: Ignore mapping IO port bar(1) 00:05:04.619 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:04.619 EAL: Ignore mapping IO port bar(1) 00:05:04.880 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:04.880 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:04.880 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:04.880 Starting DPDK initialization... 00:05:04.880 Starting SPDK post initialization... 00:05:04.880 SPDK NVMe probe 00:05:04.880 Attaching to 0000:65:00.0 00:05:04.880 Attached to 0000:65:00.0 00:05:04.880 Cleaning up... 00:05:06.797 00:05:06.797 real 0m5.742s 00:05:06.797 user 0m0.113s 00:05:06.797 sys 0m0.187s 00:05:06.797 08:04:03 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.797 08:04:03 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:06.797 ************************************ 00:05:06.797 END TEST env_dpdk_post_init 00:05:06.797 ************************************ 00:05:06.797 08:04:03 env -- env/env.sh@26 -- # uname 00:05:06.797 08:04:03 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:06.797 08:04:03 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:06.797 08:04:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.797 08:04:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.797 08:04:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.797 ************************************ 00:05:06.797 START TEST env_mem_callbacks 00:05:06.797 ************************************ 00:05:06.797 08:04:03 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:06.797 EAL: Detected CPU lcores: 128 00:05:06.797 EAL: Detected NUMA nodes: 2 00:05:06.797 EAL: Detected shared linkage of DPDK 00:05:06.797 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:06.797 EAL: Selected IOVA mode 'VA' 00:05:06.797 EAL: VFIO support initialized 00:05:06.797 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:06.797 00:05:06.797 00:05:06.797 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.797 http://cunit.sourceforge.net/ 00:05:06.797 00:05:06.797 00:05:06.797 Suite: memory 00:05:06.797 Test: test ... 
00:05:06.797 register 0x200000200000 2097152 00:05:06.797 malloc 3145728 00:05:06.797 register 0x200000400000 4194304 00:05:06.797 buf 0x200000500000 len 3145728 PASSED 00:05:06.797 malloc 64 00:05:06.797 buf 0x2000004fff40 len 64 PASSED 00:05:06.797 malloc 4194304 00:05:06.797 register 0x200000800000 6291456 00:05:06.797 buf 0x200000a00000 len 4194304 PASSED 00:05:06.797 free 0x200000500000 3145728 00:05:06.797 free 0x2000004fff40 64 00:05:06.797 unregister 0x200000400000 4194304 PASSED 00:05:06.797 free 0x200000a00000 4194304 00:05:06.797 unregister 0x200000800000 6291456 PASSED 00:05:06.797 malloc 8388608 00:05:06.797 register 0x200000400000 10485760 00:05:06.797 buf 0x200000600000 len 8388608 PASSED 00:05:06.797 free 0x200000600000 8388608 00:05:06.797 unregister 0x200000400000 10485760 PASSED 00:05:06.797 passed 00:05:06.797 00:05:06.797 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.797 suites 1 1 n/a 0 0 00:05:06.797 tests 1 1 1 0 0 00:05:06.797 asserts 15 15 15 0 n/a 00:05:06.797 00:05:06.797 Elapsed time = 0.010 seconds 00:05:06.797 00:05:06.797 real 0m0.070s 00:05:06.797 user 0m0.024s 00:05:06.797 sys 0m0.046s 00:05:06.797 08:04:03 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.797 08:04:03 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:06.797 ************************************ 00:05:06.797 END TEST env_mem_callbacks 00:05:06.797 ************************************ 00:05:06.797 00:05:06.797 real 0m7.553s 00:05:06.797 user 0m1.048s 00:05:06.797 sys 0m1.070s 00:05:06.797 08:04:03 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.797 08:04:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.797 ************************************ 00:05:06.797 END TEST env 00:05:06.797 ************************************ 00:05:06.797 08:04:03 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:06.797 08:04:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.797 08:04:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.797 08:04:03 -- common/autotest_common.sh@10 -- # set +x 00:05:06.797 ************************************ 00:05:06.797 START TEST rpc 00:05:06.797 ************************************ 00:05:06.797 08:04:03 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:07.060 * Looking for test storage... 
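The register/unregister lines above come from the mem_callbacks unit test: it installs a memory callback, then mallocs and frees DPDK buffers of several sizes, checking that each allocation and release is matched by the expected register or unregister notification (the PASSED markers). The binary can be rerun on its own using the exact path run_test printed above:

$ sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks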
00:05:07.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:07.060 08:04:04 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:07.060 08:04:04 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:07.060 08:04:04 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:07.060 08:04:04 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:07.060 08:04:04 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.060 08:04:04 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.060 08:04:04 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.060 08:04:04 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.060 08:04:04 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.060 08:04:04 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.060 08:04:04 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.060 08:04:04 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.060 08:04:04 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.060 08:04:04 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.060 08:04:04 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.060 08:04:04 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:07.060 08:04:04 rpc -- scripts/common.sh@345 -- # : 1 00:05:07.060 08:04:04 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.060 08:04:04 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.060 08:04:04 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:07.060 08:04:04 rpc -- scripts/common.sh@353 -- # local d=1 00:05:07.060 08:04:04 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.060 08:04:04 rpc -- scripts/common.sh@355 -- # echo 1 00:05:07.060 08:04:04 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.060 08:04:04 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:07.060 08:04:04 rpc -- scripts/common.sh@353 -- # local d=2 00:05:07.060 08:04:04 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.060 08:04:04 rpc -- scripts/common.sh@355 -- # echo 2 00:05:07.060 08:04:04 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.060 08:04:04 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.060 08:04:04 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.060 08:04:04 rpc -- scripts/common.sh@368 -- # return 0 00:05:07.060 08:04:04 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.060 08:04:04 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:07.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.060 --rc genhtml_branch_coverage=1 00:05:07.060 --rc genhtml_function_coverage=1 00:05:07.060 --rc genhtml_legend=1 00:05:07.060 --rc geninfo_all_blocks=1 00:05:07.060 --rc geninfo_unexecuted_blocks=1 00:05:07.060 00:05:07.060 ' 00:05:07.060 08:04:04 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:07.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.060 --rc genhtml_branch_coverage=1 00:05:07.060 --rc genhtml_function_coverage=1 00:05:07.060 --rc genhtml_legend=1 00:05:07.060 --rc geninfo_all_blocks=1 00:05:07.060 --rc geninfo_unexecuted_blocks=1 00:05:07.060 00:05:07.060 ' 00:05:07.060 08:04:04 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:07.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.060 --rc genhtml_branch_coverage=1 00:05:07.060 --rc genhtml_function_coverage=1 
00:05:07.060 --rc genhtml_legend=1 00:05:07.060 --rc geninfo_all_blocks=1 00:05:07.060 --rc geninfo_unexecuted_blocks=1 00:05:07.060 00:05:07.060 ' 00:05:07.060 08:04:04 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:07.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.060 --rc genhtml_branch_coverage=1 00:05:07.060 --rc genhtml_function_coverage=1 00:05:07.060 --rc genhtml_legend=1 00:05:07.060 --rc geninfo_all_blocks=1 00:05:07.060 --rc geninfo_unexecuted_blocks=1 00:05:07.060 00:05:07.060 ' 00:05:07.060 08:04:04 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1729831 00:05:07.060 08:04:04 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.060 08:04:04 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1729831 00:05:07.060 08:04:04 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:07.060 08:04:04 rpc -- common/autotest_common.sh@835 -- # '[' -z 1729831 ']' 00:05:07.060 08:04:04 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.060 08:04:04 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.060 08:04:04 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.060 08:04:04 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.060 08:04:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.060 [2024-11-28 08:04:04.265707] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:05:07.060 [2024-11-28 08:04:04.265774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1729831 ] 00:05:07.060 [2024-11-28 08:04:04.334440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.322 [2024-11-28 08:04:04.380956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:07.322 [2024-11-28 08:04:04.381011] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1729831' to capture a snapshot of events at runtime. 00:05:07.322 [2024-11-28 08:04:04.381019] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:07.322 [2024-11-28 08:04:04.381024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:07.322 [2024-11-28 08:04:04.381029] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1729831 for offline analysis/debug. 
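At this point rpc.sh has launched the target, and everything that follows is issued through rpc_cmd, the test wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock socket. A minimal sketch of the same setup by hand, using the binaries shown in this run (the sleep stands in for the waitforlisten helper, and the spdk_trace call simply follows the notice printed above):

$ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$ ./build/bin/spdk_tgt -e bdev &        # -e bdev enables the bdev tracepoint group
$ tgt_pid=$!
$ sleep 2                               # stand-in for the waitforlisten helper
$ ./scripts/rpc.py spdk_get_version     # sanity RPC over /var/tmp/spdk.sock
$ spdk_trace -s spdk_tgt -p $tgt_pid    # snapshot tracepoints, per the notice above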
00:05:07.322 [2024-11-28 08:04:04.381743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.584 08:04:04 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.584 08:04:04 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:07.584 08:04:04 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:07.584 08:04:04 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:07.584 08:04:04 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:07.584 08:04:04 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:07.584 08:04:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.584 08:04:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.584 08:04:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.584 ************************************ 00:05:07.584 START TEST rpc_integrity 00:05:07.584 ************************************ 00:05:07.584 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:07.584 08:04:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:07.584 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.584 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.584 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.584 08:04:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:07.584 08:04:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:07.584 08:04:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:07.585 08:04:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:07.585 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.585 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.585 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.585 08:04:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:07.585 08:04:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:07.585 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.585 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.585 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.585 08:04:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:07.585 { 00:05:07.585 "name": "Malloc0", 00:05:07.585 "aliases": [ 00:05:07.585 "2124b7bd-922c-4195-8fe8-2c1a101e171a" 00:05:07.585 ], 00:05:07.585 "product_name": "Malloc disk", 00:05:07.585 "block_size": 512, 00:05:07.585 "num_blocks": 16384, 00:05:07.585 "uuid": "2124b7bd-922c-4195-8fe8-2c1a101e171a", 00:05:07.585 "assigned_rate_limits": { 00:05:07.585 "rw_ios_per_sec": 0, 00:05:07.585 "rw_mbytes_per_sec": 0, 00:05:07.585 "r_mbytes_per_sec": 0, 00:05:07.585 "w_mbytes_per_sec": 0 00:05:07.585 }, 
00:05:07.585 "claimed": false, 00:05:07.585 "zoned": false, 00:05:07.585 "supported_io_types": { 00:05:07.585 "read": true, 00:05:07.585 "write": true, 00:05:07.585 "unmap": true, 00:05:07.585 "flush": true, 00:05:07.585 "reset": true, 00:05:07.585 "nvme_admin": false, 00:05:07.585 "nvme_io": false, 00:05:07.585 "nvme_io_md": false, 00:05:07.585 "write_zeroes": true, 00:05:07.585 "zcopy": true, 00:05:07.585 "get_zone_info": false, 00:05:07.585 "zone_management": false, 00:05:07.585 "zone_append": false, 00:05:07.585 "compare": false, 00:05:07.585 "compare_and_write": false, 00:05:07.585 "abort": true, 00:05:07.585 "seek_hole": false, 00:05:07.585 "seek_data": false, 00:05:07.585 "copy": true, 00:05:07.585 "nvme_iov_md": false 00:05:07.585 }, 00:05:07.585 "memory_domains": [ 00:05:07.585 { 00:05:07.585 "dma_device_id": "system", 00:05:07.585 "dma_device_type": 1 00:05:07.585 }, 00:05:07.585 { 00:05:07.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.585 "dma_device_type": 2 00:05:07.585 } 00:05:07.585 ], 00:05:07.585 "driver_specific": {} 00:05:07.585 } 00:05:07.585 ]' 00:05:07.585 08:04:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:07.585 08:04:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:07.585 08:04:04 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:07.585 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.585 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.585 [2024-11-28 08:04:04.806857] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:07.585 [2024-11-28 08:04:04.806907] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:07.585 [2024-11-28 08:04:04.806925] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21f2800 00:05:07.585 [2024-11-28 08:04:04.806933] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:07.585 [2024-11-28 08:04:04.808548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:07.585 [2024-11-28 08:04:04.808588] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:07.585 Passthru0 00:05:07.585 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.585 08:04:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:07.585 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.585 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.585 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.585 08:04:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:07.585 { 00:05:07.585 "name": "Malloc0", 00:05:07.585 "aliases": [ 00:05:07.585 "2124b7bd-922c-4195-8fe8-2c1a101e171a" 00:05:07.585 ], 00:05:07.585 "product_name": "Malloc disk", 00:05:07.585 "block_size": 512, 00:05:07.585 "num_blocks": 16384, 00:05:07.585 "uuid": "2124b7bd-922c-4195-8fe8-2c1a101e171a", 00:05:07.585 "assigned_rate_limits": { 00:05:07.585 "rw_ios_per_sec": 0, 00:05:07.585 "rw_mbytes_per_sec": 0, 00:05:07.585 "r_mbytes_per_sec": 0, 00:05:07.585 "w_mbytes_per_sec": 0 00:05:07.585 }, 00:05:07.585 "claimed": true, 00:05:07.585 "claim_type": "exclusive_write", 00:05:07.585 "zoned": false, 00:05:07.585 "supported_io_types": { 00:05:07.585 "read": true, 00:05:07.585 "write": true, 00:05:07.585 "unmap": true, 00:05:07.585 "flush": 
true, 00:05:07.585 "reset": true, 00:05:07.585 "nvme_admin": false, 00:05:07.585 "nvme_io": false, 00:05:07.585 "nvme_io_md": false, 00:05:07.585 "write_zeroes": true, 00:05:07.585 "zcopy": true, 00:05:07.585 "get_zone_info": false, 00:05:07.585 "zone_management": false, 00:05:07.585 "zone_append": false, 00:05:07.585 "compare": false, 00:05:07.585 "compare_and_write": false, 00:05:07.585 "abort": true, 00:05:07.585 "seek_hole": false, 00:05:07.585 "seek_data": false, 00:05:07.585 "copy": true, 00:05:07.585 "nvme_iov_md": false 00:05:07.585 }, 00:05:07.585 "memory_domains": [ 00:05:07.585 { 00:05:07.585 "dma_device_id": "system", 00:05:07.585 "dma_device_type": 1 00:05:07.585 }, 00:05:07.585 { 00:05:07.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.585 "dma_device_type": 2 00:05:07.585 } 00:05:07.585 ], 00:05:07.585 "driver_specific": {} 00:05:07.585 }, 00:05:07.585 { 00:05:07.585 "name": "Passthru0", 00:05:07.585 "aliases": [ 00:05:07.585 "cfee92b6-8041-53a1-9a42-d4261e8554a7" 00:05:07.585 ], 00:05:07.585 "product_name": "passthru", 00:05:07.585 "block_size": 512, 00:05:07.585 "num_blocks": 16384, 00:05:07.585 "uuid": "cfee92b6-8041-53a1-9a42-d4261e8554a7", 00:05:07.585 "assigned_rate_limits": { 00:05:07.585 "rw_ios_per_sec": 0, 00:05:07.585 "rw_mbytes_per_sec": 0, 00:05:07.585 "r_mbytes_per_sec": 0, 00:05:07.585 "w_mbytes_per_sec": 0 00:05:07.585 }, 00:05:07.585 "claimed": false, 00:05:07.585 "zoned": false, 00:05:07.585 "supported_io_types": { 00:05:07.585 "read": true, 00:05:07.585 "write": true, 00:05:07.585 "unmap": true, 00:05:07.585 "flush": true, 00:05:07.585 "reset": true, 00:05:07.585 "nvme_admin": false, 00:05:07.585 "nvme_io": false, 00:05:07.585 "nvme_io_md": false, 00:05:07.585 "write_zeroes": true, 00:05:07.585 "zcopy": true, 00:05:07.585 "get_zone_info": false, 00:05:07.585 "zone_management": false, 00:05:07.585 "zone_append": false, 00:05:07.585 "compare": false, 00:05:07.585 "compare_and_write": false, 00:05:07.585 "abort": true, 00:05:07.585 "seek_hole": false, 00:05:07.585 "seek_data": false, 00:05:07.585 "copy": true, 00:05:07.585 "nvme_iov_md": false 00:05:07.585 }, 00:05:07.585 "memory_domains": [ 00:05:07.585 { 00:05:07.585 "dma_device_id": "system", 00:05:07.585 "dma_device_type": 1 00:05:07.585 }, 00:05:07.585 { 00:05:07.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.585 "dma_device_type": 2 00:05:07.585 } 00:05:07.585 ], 00:05:07.585 "driver_specific": { 00:05:07.585 "passthru": { 00:05:07.585 "name": "Passthru0", 00:05:07.585 "base_bdev_name": "Malloc0" 00:05:07.585 } 00:05:07.585 } 00:05:07.585 } 00:05:07.585 ]' 00:05:07.585 08:04:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:07.856 08:04:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:07.856 08:04:04 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:07.856 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.856 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.856 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.856 08:04:04 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:07.856 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.856 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.856 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.856 08:04:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:05:07.856 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.856 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.856 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.856 08:04:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:07.856 08:04:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:07.856 08:04:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:07.856 00:05:07.856 real 0m0.311s 00:05:07.856 user 0m0.190s 00:05:07.856 sys 0m0.048s 00:05:07.856 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.856 08:04:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.856 ************************************ 00:05:07.856 END TEST rpc_integrity 00:05:07.856 ************************************ 00:05:07.856 08:04:05 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:07.856 08:04:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.856 08:04:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.856 08:04:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.856 ************************************ 00:05:07.856 START TEST rpc_plugins 00:05:07.856 ************************************ 00:05:07.856 08:04:05 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:07.856 08:04:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:07.856 08:04:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.856 08:04:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:07.856 08:04:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.856 08:04:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:07.856 08:04:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:07.856 08:04:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.856 08:04:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:07.856 08:04:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.856 08:04:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:07.856 { 00:05:07.856 "name": "Malloc1", 00:05:07.856 "aliases": [ 00:05:07.856 "be0b47f2-b365-46a0-a554-a7d688ab7c47" 00:05:07.856 ], 00:05:07.856 "product_name": "Malloc disk", 00:05:07.856 "block_size": 4096, 00:05:07.856 "num_blocks": 256, 00:05:07.856 "uuid": "be0b47f2-b365-46a0-a554-a7d688ab7c47", 00:05:07.856 "assigned_rate_limits": { 00:05:07.856 "rw_ios_per_sec": 0, 00:05:07.856 "rw_mbytes_per_sec": 0, 00:05:07.856 "r_mbytes_per_sec": 0, 00:05:07.856 "w_mbytes_per_sec": 0 00:05:07.856 }, 00:05:07.856 "claimed": false, 00:05:07.856 "zoned": false, 00:05:07.856 "supported_io_types": { 00:05:07.856 "read": true, 00:05:07.856 "write": true, 00:05:07.856 "unmap": true, 00:05:07.856 "flush": true, 00:05:07.856 "reset": true, 00:05:07.856 "nvme_admin": false, 00:05:07.856 "nvme_io": false, 00:05:07.856 "nvme_io_md": false, 00:05:07.856 "write_zeroes": true, 00:05:07.856 "zcopy": true, 00:05:07.856 "get_zone_info": false, 00:05:07.856 "zone_management": false, 00:05:07.856 "zone_append": false, 00:05:07.856 "compare": false, 00:05:07.856 "compare_and_write": false, 00:05:07.856 "abort": true, 00:05:07.856 "seek_hole": false, 00:05:07.856 "seek_data": false, 00:05:07.856 "copy": true, 00:05:07.856 "nvme_iov_md": false 
00:05:07.856 }, 00:05:07.856 "memory_domains": [ 00:05:07.856 { 00:05:07.856 "dma_device_id": "system", 00:05:07.856 "dma_device_type": 1 00:05:07.856 }, 00:05:07.856 { 00:05:07.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.856 "dma_device_type": 2 00:05:07.856 } 00:05:07.856 ], 00:05:07.856 "driver_specific": {} 00:05:07.856 } 00:05:07.856 ]' 00:05:07.856 08:04:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:07.856 08:04:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:07.856 08:04:05 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:07.856 08:04:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.856 08:04:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:07.857 08:04:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.857 08:04:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:07.857 08:04:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.857 08:04:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:08.148 08:04:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.148 08:04:05 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:08.148 08:04:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:08.148 08:04:05 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:08.148 00:05:08.148 real 0m0.153s 00:05:08.148 user 0m0.086s 00:05:08.148 sys 0m0.030s 00:05:08.148 08:04:05 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.148 08:04:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:08.148 ************************************ 00:05:08.148 END TEST rpc_plugins 00:05:08.148 ************************************ 00:05:08.148 08:04:05 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:08.148 08:04:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.148 08:04:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.148 08:04:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.148 ************************************ 00:05:08.148 START TEST rpc_trace_cmd_test 00:05:08.148 ************************************ 00:05:08.148 08:04:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:08.148 08:04:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:08.148 08:04:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:08.148 08:04:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.148 08:04:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:08.148 08:04:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.148 08:04:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:08.148 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1729831", 00:05:08.148 "tpoint_group_mask": "0x8", 00:05:08.148 "iscsi_conn": { 00:05:08.148 "mask": "0x2", 00:05:08.148 "tpoint_mask": "0x0" 00:05:08.148 }, 00:05:08.148 "scsi": { 00:05:08.148 "mask": "0x4", 00:05:08.148 "tpoint_mask": "0x0" 00:05:08.149 }, 00:05:08.149 "bdev": { 00:05:08.149 "mask": "0x8", 00:05:08.149 "tpoint_mask": "0xffffffffffffffff" 00:05:08.149 }, 00:05:08.149 "nvmf_rdma": { 00:05:08.149 "mask": "0x10", 00:05:08.149 "tpoint_mask": "0x0" 00:05:08.149 }, 00:05:08.149 "nvmf_tcp": { 00:05:08.149 "mask": "0x20", 00:05:08.149 
"tpoint_mask": "0x0" 00:05:08.149 }, 00:05:08.149 "ftl": { 00:05:08.149 "mask": "0x40", 00:05:08.149 "tpoint_mask": "0x0" 00:05:08.149 }, 00:05:08.149 "blobfs": { 00:05:08.149 "mask": "0x80", 00:05:08.149 "tpoint_mask": "0x0" 00:05:08.149 }, 00:05:08.149 "dsa": { 00:05:08.149 "mask": "0x200", 00:05:08.149 "tpoint_mask": "0x0" 00:05:08.149 }, 00:05:08.149 "thread": { 00:05:08.149 "mask": "0x400", 00:05:08.149 "tpoint_mask": "0x0" 00:05:08.149 }, 00:05:08.149 "nvme_pcie": { 00:05:08.149 "mask": "0x800", 00:05:08.149 "tpoint_mask": "0x0" 00:05:08.149 }, 00:05:08.149 "iaa": { 00:05:08.149 "mask": "0x1000", 00:05:08.149 "tpoint_mask": "0x0" 00:05:08.149 }, 00:05:08.149 "nvme_tcp": { 00:05:08.149 "mask": "0x2000", 00:05:08.149 "tpoint_mask": "0x0" 00:05:08.149 }, 00:05:08.149 "bdev_nvme": { 00:05:08.149 "mask": "0x4000", 00:05:08.149 "tpoint_mask": "0x0" 00:05:08.149 }, 00:05:08.149 "sock": { 00:05:08.149 "mask": "0x8000", 00:05:08.149 "tpoint_mask": "0x0" 00:05:08.149 }, 00:05:08.149 "blob": { 00:05:08.149 "mask": "0x10000", 00:05:08.149 "tpoint_mask": "0x0" 00:05:08.149 }, 00:05:08.149 "bdev_raid": { 00:05:08.149 "mask": "0x20000", 00:05:08.149 "tpoint_mask": "0x0" 00:05:08.149 }, 00:05:08.149 "scheduler": { 00:05:08.149 "mask": "0x40000", 00:05:08.149 "tpoint_mask": "0x0" 00:05:08.149 } 00:05:08.149 }' 00:05:08.149 08:04:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:08.149 08:04:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:08.149 08:04:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:08.149 08:04:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:08.149 08:04:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:08.445 08:04:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:08.445 08:04:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:08.445 08:04:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:08.445 08:04:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:08.445 08:04:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:08.445 00:05:08.445 real 0m0.255s 00:05:08.445 user 0m0.216s 00:05:08.445 sys 0m0.031s 00:05:08.445 08:04:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.445 08:04:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:08.445 ************************************ 00:05:08.445 END TEST rpc_trace_cmd_test 00:05:08.445 ************************************ 00:05:08.445 08:04:05 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:08.445 08:04:05 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:08.445 08:04:05 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:08.445 08:04:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.445 08:04:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.445 08:04:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.445 ************************************ 00:05:08.445 START TEST rpc_daemon_integrity 00:05:08.445 ************************************ 00:05:08.445 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:08.445 08:04:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:08.445 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.445 08:04:05 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.445 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.445 08:04:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:08.445 08:04:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:08.445 08:04:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:08.445 08:04:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:08.445 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.445 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.445 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.445 08:04:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:08.445 08:04:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:08.445 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.445 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.445 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.445 08:04:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:08.445 { 00:05:08.445 "name": "Malloc2", 00:05:08.445 "aliases": [ 00:05:08.445 "0094fd3d-037c-4a64-9852-0fd0f113d3f6" 00:05:08.445 ], 00:05:08.445 "product_name": "Malloc disk", 00:05:08.445 "block_size": 512, 00:05:08.445 "num_blocks": 16384, 00:05:08.445 "uuid": "0094fd3d-037c-4a64-9852-0fd0f113d3f6", 00:05:08.445 "assigned_rate_limits": { 00:05:08.445 "rw_ios_per_sec": 0, 00:05:08.445 "rw_mbytes_per_sec": 0, 00:05:08.445 "r_mbytes_per_sec": 0, 00:05:08.445 "w_mbytes_per_sec": 0 00:05:08.445 }, 00:05:08.445 "claimed": false, 00:05:08.445 "zoned": false, 00:05:08.445 "supported_io_types": { 00:05:08.445 "read": true, 00:05:08.445 "write": true, 00:05:08.445 "unmap": true, 00:05:08.445 "flush": true, 00:05:08.445 "reset": true, 00:05:08.445 "nvme_admin": false, 00:05:08.445 "nvme_io": false, 00:05:08.445 "nvme_io_md": false, 00:05:08.445 "write_zeroes": true, 00:05:08.445 "zcopy": true, 00:05:08.445 "get_zone_info": false, 00:05:08.445 "zone_management": false, 00:05:08.445 "zone_append": false, 00:05:08.445 "compare": false, 00:05:08.445 "compare_and_write": false, 00:05:08.445 "abort": true, 00:05:08.445 "seek_hole": false, 00:05:08.445 "seek_data": false, 00:05:08.445 "copy": true, 00:05:08.445 "nvme_iov_md": false 00:05:08.445 }, 00:05:08.445 "memory_domains": [ 00:05:08.445 { 00:05:08.445 "dma_device_id": "system", 00:05:08.445 "dma_device_type": 1 00:05:08.445 }, 00:05:08.445 { 00:05:08.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.445 "dma_device_type": 2 00:05:08.445 } 00:05:08.445 ], 00:05:08.445 "driver_specific": {} 00:05:08.445 } 00:05:08.445 ]' 00:05:08.445 08:04:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:08.708 08:04:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:08.708 08:04:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:08.708 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.708 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.708 [2024-11-28 08:04:05.761436] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:08.708 
[2024-11-28 08:04:05.761480] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:08.708 [2024-11-28 08:04:05.761497] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20aefe0 00:05:08.708 [2024-11-28 08:04:05.761505] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:08.708 [2024-11-28 08:04:05.763003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:08.708 [2024-11-28 08:04:05.763041] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:08.708 Passthru0 00:05:08.708 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.708 08:04:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:08.708 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.708 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.708 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.708 08:04:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:08.708 { 00:05:08.708 "name": "Malloc2", 00:05:08.708 "aliases": [ 00:05:08.708 "0094fd3d-037c-4a64-9852-0fd0f113d3f6" 00:05:08.708 ], 00:05:08.708 "product_name": "Malloc disk", 00:05:08.708 "block_size": 512, 00:05:08.708 "num_blocks": 16384, 00:05:08.708 "uuid": "0094fd3d-037c-4a64-9852-0fd0f113d3f6", 00:05:08.708 "assigned_rate_limits": { 00:05:08.708 "rw_ios_per_sec": 0, 00:05:08.708 "rw_mbytes_per_sec": 0, 00:05:08.708 "r_mbytes_per_sec": 0, 00:05:08.708 "w_mbytes_per_sec": 0 00:05:08.708 }, 00:05:08.708 "claimed": true, 00:05:08.708 "claim_type": "exclusive_write", 00:05:08.708 "zoned": false, 00:05:08.708 "supported_io_types": { 00:05:08.708 "read": true, 00:05:08.708 "write": true, 00:05:08.708 "unmap": true, 00:05:08.708 "flush": true, 00:05:08.708 "reset": true, 00:05:08.708 "nvme_admin": false, 00:05:08.708 "nvme_io": false, 00:05:08.708 "nvme_io_md": false, 00:05:08.708 "write_zeroes": true, 00:05:08.709 "zcopy": true, 00:05:08.709 "get_zone_info": false, 00:05:08.709 "zone_management": false, 00:05:08.709 "zone_append": false, 00:05:08.709 "compare": false, 00:05:08.709 "compare_and_write": false, 00:05:08.709 "abort": true, 00:05:08.709 "seek_hole": false, 00:05:08.709 "seek_data": false, 00:05:08.709 "copy": true, 00:05:08.709 "nvme_iov_md": false 00:05:08.709 }, 00:05:08.709 "memory_domains": [ 00:05:08.709 { 00:05:08.709 "dma_device_id": "system", 00:05:08.709 "dma_device_type": 1 00:05:08.709 }, 00:05:08.709 { 00:05:08.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.709 "dma_device_type": 2 00:05:08.709 } 00:05:08.709 ], 00:05:08.709 "driver_specific": {} 00:05:08.709 }, 00:05:08.709 { 00:05:08.709 "name": "Passthru0", 00:05:08.709 "aliases": [ 00:05:08.709 "edfa7604-c722-5f7d-8344-ed575ab900ee" 00:05:08.709 ], 00:05:08.709 "product_name": "passthru", 00:05:08.709 "block_size": 512, 00:05:08.709 "num_blocks": 16384, 00:05:08.709 "uuid": "edfa7604-c722-5f7d-8344-ed575ab900ee", 00:05:08.709 "assigned_rate_limits": { 00:05:08.709 "rw_ios_per_sec": 0, 00:05:08.709 "rw_mbytes_per_sec": 0, 00:05:08.709 "r_mbytes_per_sec": 0, 00:05:08.709 "w_mbytes_per_sec": 0 00:05:08.709 }, 00:05:08.709 "claimed": false, 00:05:08.709 "zoned": false, 00:05:08.709 "supported_io_types": { 00:05:08.709 "read": true, 00:05:08.709 "write": true, 00:05:08.709 "unmap": true, 00:05:08.709 "flush": true, 00:05:08.709 "reset": true, 
00:05:08.709 "nvme_admin": false, 00:05:08.709 "nvme_io": false, 00:05:08.709 "nvme_io_md": false, 00:05:08.709 "write_zeroes": true, 00:05:08.709 "zcopy": true, 00:05:08.709 "get_zone_info": false, 00:05:08.709 "zone_management": false, 00:05:08.709 "zone_append": false, 00:05:08.709 "compare": false, 00:05:08.709 "compare_and_write": false, 00:05:08.709 "abort": true, 00:05:08.709 "seek_hole": false, 00:05:08.709 "seek_data": false, 00:05:08.709 "copy": true, 00:05:08.709 "nvme_iov_md": false 00:05:08.709 }, 00:05:08.709 "memory_domains": [ 00:05:08.709 { 00:05:08.709 "dma_device_id": "system", 00:05:08.709 "dma_device_type": 1 00:05:08.709 }, 00:05:08.709 { 00:05:08.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.709 "dma_device_type": 2 00:05:08.709 } 00:05:08.709 ], 00:05:08.709 "driver_specific": { 00:05:08.709 "passthru": { 00:05:08.709 "name": "Passthru0", 00:05:08.709 "base_bdev_name": "Malloc2" 00:05:08.709 } 00:05:08.709 } 00:05:08.709 } 00:05:08.709 ]' 00:05:08.709 08:04:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:08.709 08:04:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:08.709 08:04:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:08.709 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.709 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.709 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.709 08:04:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:08.709 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.709 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.709 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.709 08:04:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:08.709 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.709 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.709 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.709 08:04:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:08.709 08:04:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:08.709 08:04:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:08.709 00:05:08.709 real 0m0.304s 00:05:08.709 user 0m0.188s 00:05:08.709 sys 0m0.048s 00:05:08.709 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.709 08:04:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.709 ************************************ 00:05:08.709 END TEST rpc_daemon_integrity 00:05:08.709 ************************************ 00:05:08.709 08:04:05 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:08.709 08:04:05 rpc -- rpc/rpc.sh@84 -- # killprocess 1729831 00:05:08.709 08:04:05 rpc -- common/autotest_common.sh@954 -- # '[' -z 1729831 ']' 00:05:08.709 08:04:05 rpc -- common/autotest_common.sh@958 -- # kill -0 1729831 00:05:08.709 08:04:05 rpc -- common/autotest_common.sh@959 -- # uname 00:05:08.709 08:04:05 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.709 08:04:05 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1729831 
00:05:08.970 08:04:06 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.970 08:04:06 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.970 08:04:06 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1729831' 00:05:08.970 killing process with pid 1729831 00:05:08.970 08:04:06 rpc -- common/autotest_common.sh@973 -- # kill 1729831 00:05:08.970 08:04:06 rpc -- common/autotest_common.sh@978 -- # wait 1729831 00:05:09.231 00:05:09.231 real 0m2.269s 00:05:09.231 user 0m2.937s 00:05:09.231 sys 0m0.801s 00:05:09.231 08:04:06 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.231 08:04:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.231 ************************************ 00:05:09.231 END TEST rpc 00:05:09.231 ************************************ 00:05:09.231 08:04:06 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:09.231 08:04:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.231 08:04:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.231 08:04:06 -- common/autotest_common.sh@10 -- # set +x 00:05:09.231 ************************************ 00:05:09.231 START TEST skip_rpc 00:05:09.231 ************************************ 00:05:09.231 08:04:06 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:09.231 * Looking for test storage... 00:05:09.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:09.231 08:04:06 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:09.231 08:04:06 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:09.231 08:04:06 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:09.493 08:04:06 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.493 08:04:06 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:09.493 08:04:06 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.493 08:04:06 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:09.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.493 --rc genhtml_branch_coverage=1 00:05:09.493 --rc genhtml_function_coverage=1 00:05:09.493 --rc genhtml_legend=1 00:05:09.493 --rc geninfo_all_blocks=1 00:05:09.493 --rc geninfo_unexecuted_blocks=1 00:05:09.493 00:05:09.493 ' 00:05:09.493 08:04:06 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:09.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.493 --rc genhtml_branch_coverage=1 00:05:09.493 --rc genhtml_function_coverage=1 00:05:09.493 --rc genhtml_legend=1 00:05:09.493 --rc geninfo_all_blocks=1 00:05:09.493 --rc geninfo_unexecuted_blocks=1 00:05:09.493 00:05:09.493 ' 00:05:09.493 08:04:06 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:09.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.493 --rc genhtml_branch_coverage=1 00:05:09.493 --rc genhtml_function_coverage=1 00:05:09.493 --rc genhtml_legend=1 00:05:09.493 --rc geninfo_all_blocks=1 00:05:09.493 --rc geninfo_unexecuted_blocks=1 00:05:09.493 00:05:09.493 ' 00:05:09.493 08:04:06 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:09.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.493 --rc genhtml_branch_coverage=1 00:05:09.493 --rc genhtml_function_coverage=1 00:05:09.493 --rc genhtml_legend=1 00:05:09.493 --rc geninfo_all_blocks=1 00:05:09.493 --rc geninfo_unexecuted_blocks=1 00:05:09.493 00:05:09.493 ' 00:05:09.493 08:04:06 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:09.493 08:04:06 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:09.493 08:04:06 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:09.493 08:04:06 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.493 08:04:06 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.493 08:04:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.493 ************************************ 00:05:09.493 START TEST skip_rpc 00:05:09.493 ************************************ 00:05:09.493 08:04:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:09.493 
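The skip_rpc case that starts below is a negative test: spdk_tgt is launched with --no-rpc-server, so the spdk_get_version RPC has no listener and must fail; the NOT/es=1 sequence in the log asserts exactly that before the target is killed. A hand-run sketch with the same flags:

$ ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
$ tgt_pid=$!
$ sleep 5                                          # same settle delay the test uses
$ ./scripts/rpc.py spdk_get_version \
    && echo "unexpected: RPC answered" \
    || echo "RPC unavailable, as intended"
$ kill $tgt_pid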
08:04:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1730561 00:05:09.493 08:04:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.493 08:04:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:09.493 08:04:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:09.493 [2024-11-28 08:04:06.659083] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:05:09.493 [2024-11-28 08:04:06.659148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1730561 ] 00:05:09.493 [2024-11-28 08:04:06.749482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.754 [2024-11-28 08:04:06.802889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1730561 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1730561 ']' 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1730561 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1730561 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1730561' 00:05:15.045 killing process with pid 1730561 00:05:15.045 08:04:11 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1730561 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1730561 00:05:15.045 00:05:15.045 real 0m5.267s 00:05:15.045 user 0m5.013s 00:05:15.045 sys 0m0.302s 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.045 08:04:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.045 ************************************ 00:05:15.045 END TEST skip_rpc 00:05:15.045 ************************************ 00:05:15.045 08:04:11 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:15.045 08:04:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.045 08:04:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.045 08:04:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.045 ************************************ 00:05:15.045 START TEST skip_rpc_with_json 00:05:15.045 ************************************ 00:05:15.045 08:04:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:15.045 08:04:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:15.045 08:04:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1731719 00:05:15.045 08:04:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.045 08:04:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1731719 00:05:15.045 08:04:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.045 08:04:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1731719 ']' 00:05:15.045 08:04:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.045 08:04:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.045 08:04:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.045 08:04:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.045 08:04:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.045 [2024-11-28 08:04:12.004196] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
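The skip_rpc case that just completed checks a single property: with --no-rpc-server, spdk_tgt never creates /var/tmp/spdk.sock, so every rpc.py call must fail and the harness asserts es=1. A hand-run equivalent, assuming the same build tree this job uses, is sketched below; the EAL parameter line that follows shows how the launch flags translate into DPDK initialization arguments (core mask, hugepage handling, log levels).

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt_pid=$!
  sleep 5                                    # mirror the test's settle delay
  if scripts/rpc.py spdk_get_version; then   # rpc.py defaults to /var/tmp/spdk.sock
    echo 'unexpected: RPC server answered'
  else
    echo 'expected failure: no RPC server'
  fi
  kill $tgt_pid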
00:05:15.045 [2024-11-28 08:04:12.004246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1731719 ] 00:05:15.045 [2024-11-28 08:04:12.087096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.045 [2024-11-28 08:04:12.118069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.615 08:04:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.615 08:04:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:15.615 08:04:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:15.615 08:04:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.615 08:04:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.615 [2024-11-28 08:04:12.786344] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:15.615 request: 00:05:15.615 { 00:05:15.615 "trtype": "tcp", 00:05:15.615 "method": "nvmf_get_transports", 00:05:15.615 "req_id": 1 00:05:15.615 } 00:05:15.615 Got JSON-RPC error response 00:05:15.615 response: 00:05:15.615 { 00:05:15.615 "code": -19, 00:05:15.615 "message": "No such device" 00:05:15.615 } 00:05:15.615 08:04:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:15.615 08:04:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:15.615 08:04:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.615 08:04:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.615 [2024-11-28 08:04:12.798442] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:15.615 08:04:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.615 08:04:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:15.615 08:04:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.615 08:04:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.875 08:04:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.875 08:04:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:15.875 { 00:05:15.875 "subsystems": [ 00:05:15.875 { 00:05:15.875 "subsystem": "fsdev", 00:05:15.875 "config": [ 00:05:15.875 { 00:05:15.875 "method": "fsdev_set_opts", 00:05:15.875 "params": { 00:05:15.875 "fsdev_io_pool_size": 65535, 00:05:15.875 "fsdev_io_cache_size": 256 00:05:15.875 } 00:05:15.875 } 00:05:15.875 ] 00:05:15.875 }, 00:05:15.875 { 00:05:15.875 "subsystem": "vfio_user_target", 00:05:15.875 "config": null 00:05:15.875 }, 00:05:15.875 { 00:05:15.875 "subsystem": "keyring", 00:05:15.875 "config": [] 00:05:15.875 }, 00:05:15.875 { 00:05:15.875 "subsystem": "iobuf", 00:05:15.875 "config": [ 00:05:15.875 { 00:05:15.875 "method": "iobuf_set_options", 00:05:15.875 "params": { 00:05:15.875 "small_pool_count": 8192, 00:05:15.875 "large_pool_count": 1024, 00:05:15.875 "small_bufsize": 8192, 00:05:15.875 "large_bufsize": 135168, 00:05:15.875 "enable_numa": false 00:05:15.875 } 00:05:15.875 } 
00:05:15.875 ] 00:05:15.875 }, 00:05:15.875 { 00:05:15.875 "subsystem": "sock", 00:05:15.875 "config": [ 00:05:15.875 { 00:05:15.875 "method": "sock_set_default_impl", 00:05:15.875 "params": { 00:05:15.875 "impl_name": "posix" 00:05:15.875 } 00:05:15.875 }, 00:05:15.875 { 00:05:15.875 "method": "sock_impl_set_options", 00:05:15.875 "params": { 00:05:15.875 "impl_name": "ssl", 00:05:15.875 "recv_buf_size": 4096, 00:05:15.875 "send_buf_size": 4096, 00:05:15.876 "enable_recv_pipe": true, 00:05:15.876 "enable_quickack": false, 00:05:15.876 "enable_placement_id": 0, 00:05:15.876 "enable_zerocopy_send_server": true, 00:05:15.876 "enable_zerocopy_send_client": false, 00:05:15.876 "zerocopy_threshold": 0, 00:05:15.876 "tls_version": 0, 00:05:15.876 "enable_ktls": false 00:05:15.876 } 00:05:15.876 }, 00:05:15.876 { 00:05:15.876 "method": "sock_impl_set_options", 00:05:15.876 "params": { 00:05:15.876 "impl_name": "posix", 00:05:15.876 "recv_buf_size": 2097152, 00:05:15.876 "send_buf_size": 2097152, 00:05:15.876 "enable_recv_pipe": true, 00:05:15.876 "enable_quickack": false, 00:05:15.876 "enable_placement_id": 0, 00:05:15.876 "enable_zerocopy_send_server": true, 00:05:15.876 "enable_zerocopy_send_client": false, 00:05:15.876 "zerocopy_threshold": 0, 00:05:15.876 "tls_version": 0, 00:05:15.876 "enable_ktls": false 00:05:15.876 } 00:05:15.876 } 00:05:15.876 ] 00:05:15.876 }, 00:05:15.876 { 00:05:15.876 "subsystem": "vmd", 00:05:15.876 "config": [] 00:05:15.876 }, 00:05:15.876 { 00:05:15.876 "subsystem": "accel", 00:05:15.876 "config": [ 00:05:15.876 { 00:05:15.876 "method": "accel_set_options", 00:05:15.876 "params": { 00:05:15.876 "small_cache_size": 128, 00:05:15.876 "large_cache_size": 16, 00:05:15.876 "task_count": 2048, 00:05:15.876 "sequence_count": 2048, 00:05:15.876 "buf_count": 2048 00:05:15.876 } 00:05:15.876 } 00:05:15.876 ] 00:05:15.876 }, 00:05:15.876 { 00:05:15.876 "subsystem": "bdev", 00:05:15.876 "config": [ 00:05:15.876 { 00:05:15.876 "method": "bdev_set_options", 00:05:15.876 "params": { 00:05:15.876 "bdev_io_pool_size": 65535, 00:05:15.876 "bdev_io_cache_size": 256, 00:05:15.876 "bdev_auto_examine": true, 00:05:15.876 "iobuf_small_cache_size": 128, 00:05:15.876 "iobuf_large_cache_size": 16 00:05:15.876 } 00:05:15.876 }, 00:05:15.876 { 00:05:15.876 "method": "bdev_raid_set_options", 00:05:15.876 "params": { 00:05:15.876 "process_window_size_kb": 1024, 00:05:15.876 "process_max_bandwidth_mb_sec": 0 00:05:15.876 } 00:05:15.876 }, 00:05:15.876 { 00:05:15.876 "method": "bdev_iscsi_set_options", 00:05:15.876 "params": { 00:05:15.876 "timeout_sec": 30 00:05:15.876 } 00:05:15.876 }, 00:05:15.876 { 00:05:15.876 "method": "bdev_nvme_set_options", 00:05:15.876 "params": { 00:05:15.876 "action_on_timeout": "none", 00:05:15.876 "timeout_us": 0, 00:05:15.876 "timeout_admin_us": 0, 00:05:15.876 "keep_alive_timeout_ms": 10000, 00:05:15.876 "arbitration_burst": 0, 00:05:15.876 "low_priority_weight": 0, 00:05:15.876 "medium_priority_weight": 0, 00:05:15.876 "high_priority_weight": 0, 00:05:15.876 "nvme_adminq_poll_period_us": 10000, 00:05:15.876 "nvme_ioq_poll_period_us": 0, 00:05:15.876 "io_queue_requests": 0, 00:05:15.876 "delay_cmd_submit": true, 00:05:15.876 "transport_retry_count": 4, 00:05:15.876 "bdev_retry_count": 3, 00:05:15.876 "transport_ack_timeout": 0, 00:05:15.876 "ctrlr_loss_timeout_sec": 0, 00:05:15.876 "reconnect_delay_sec": 0, 00:05:15.876 "fast_io_fail_timeout_sec": 0, 00:05:15.876 "disable_auto_failback": false, 00:05:15.876 "generate_uuids": false, 00:05:15.876 "transport_tos": 
0, 00:05:15.876 "nvme_error_stat": false, 00:05:15.876 "rdma_srq_size": 0, 00:05:15.876 "io_path_stat": false, 00:05:15.876 "allow_accel_sequence": false, 00:05:15.876 "rdma_max_cq_size": 0, 00:05:15.876 "rdma_cm_event_timeout_ms": 0, 00:05:15.876 "dhchap_digests": [ 00:05:15.876 "sha256", 00:05:15.876 "sha384", 00:05:15.876 "sha512" 00:05:15.876 ], 00:05:15.876 "dhchap_dhgroups": [ 00:05:15.876 "null", 00:05:15.876 "ffdhe2048", 00:05:15.876 "ffdhe3072", 00:05:15.876 "ffdhe4096", 00:05:15.876 "ffdhe6144", 00:05:15.876 "ffdhe8192" 00:05:15.876 ] 00:05:15.876 } 00:05:15.876 }, 00:05:15.876 { 00:05:15.876 "method": "bdev_nvme_set_hotplug", 00:05:15.876 "params": { 00:05:15.876 "period_us": 100000, 00:05:15.876 "enable": false 00:05:15.876 } 00:05:15.876 }, 00:05:15.876 { 00:05:15.876 "method": "bdev_wait_for_examine" 00:05:15.876 } 00:05:15.876 ] 00:05:15.876 }, 00:05:15.876 { 00:05:15.876 "subsystem": "scsi", 00:05:15.876 "config": null 00:05:15.876 }, 00:05:15.876 { 00:05:15.876 "subsystem": "scheduler", 00:05:15.876 "config": [ 00:05:15.876 { 00:05:15.876 "method": "framework_set_scheduler", 00:05:15.876 "params": { 00:05:15.876 "name": "static" 00:05:15.876 } 00:05:15.876 } 00:05:15.876 ] 00:05:15.876 }, 00:05:15.876 { 00:05:15.876 "subsystem": "vhost_scsi", 00:05:15.876 "config": [] 00:05:15.876 }, 00:05:15.876 { 00:05:15.876 "subsystem": "vhost_blk", 00:05:15.876 "config": [] 00:05:15.876 }, 00:05:15.876 { 00:05:15.876 "subsystem": "ublk", 00:05:15.876 "config": [] 00:05:15.876 }, 00:05:15.876 { 00:05:15.876 "subsystem": "nbd", 00:05:15.876 "config": [] 00:05:15.876 }, 00:05:15.876 { 00:05:15.876 "subsystem": "nvmf", 00:05:15.876 "config": [ 00:05:15.876 { 00:05:15.876 "method": "nvmf_set_config", 00:05:15.876 "params": { 00:05:15.876 "discovery_filter": "match_any", 00:05:15.876 "admin_cmd_passthru": { 00:05:15.876 "identify_ctrlr": false 00:05:15.876 }, 00:05:15.876 "dhchap_digests": [ 00:05:15.876 "sha256", 00:05:15.876 "sha384", 00:05:15.876 "sha512" 00:05:15.876 ], 00:05:15.876 "dhchap_dhgroups": [ 00:05:15.876 "null", 00:05:15.876 "ffdhe2048", 00:05:15.876 "ffdhe3072", 00:05:15.876 "ffdhe4096", 00:05:15.876 "ffdhe6144", 00:05:15.876 "ffdhe8192" 00:05:15.876 ] 00:05:15.876 } 00:05:15.876 }, 00:05:15.876 { 00:05:15.876 "method": "nvmf_set_max_subsystems", 00:05:15.876 "params": { 00:05:15.876 "max_subsystems": 1024 00:05:15.876 } 00:05:15.876 }, 00:05:15.876 { 00:05:15.876 "method": "nvmf_set_crdt", 00:05:15.876 "params": { 00:05:15.876 "crdt1": 0, 00:05:15.876 "crdt2": 0, 00:05:15.876 "crdt3": 0 00:05:15.876 } 00:05:15.876 }, 00:05:15.876 { 00:05:15.876 "method": "nvmf_create_transport", 00:05:15.876 "params": { 00:05:15.876 "trtype": "TCP", 00:05:15.876 "max_queue_depth": 128, 00:05:15.876 "max_io_qpairs_per_ctrlr": 127, 00:05:15.876 "in_capsule_data_size": 4096, 00:05:15.876 "max_io_size": 131072, 00:05:15.876 "io_unit_size": 131072, 00:05:15.876 "max_aq_depth": 128, 00:05:15.876 "num_shared_buffers": 511, 00:05:15.876 "buf_cache_size": 4294967295, 00:05:15.876 "dif_insert_or_strip": false, 00:05:15.876 "zcopy": false, 00:05:15.876 "c2h_success": true, 00:05:15.876 "sock_priority": 0, 00:05:15.876 "abort_timeout_sec": 1, 00:05:15.876 "ack_timeout": 0, 00:05:15.876 "data_wr_pool_size": 0 00:05:15.876 } 00:05:15.876 } 00:05:15.876 ] 00:05:15.876 }, 00:05:15.876 { 00:05:15.876 "subsystem": "iscsi", 00:05:15.876 "config": [ 00:05:15.876 { 00:05:15.876 "method": "iscsi_set_options", 00:05:15.876 "params": { 00:05:15.876 "node_base": "iqn.2016-06.io.spdk", 00:05:15.876 "max_sessions": 
128, 00:05:15.876 "max_connections_per_session": 2, 00:05:15.876 "max_queue_depth": 64, 00:05:15.876 "default_time2wait": 2, 00:05:15.876 "default_time2retain": 20, 00:05:15.876 "first_burst_length": 8192, 00:05:15.876 "immediate_data": true, 00:05:15.876 "allow_duplicated_isid": false, 00:05:15.876 "error_recovery_level": 0, 00:05:15.876 "nop_timeout": 60, 00:05:15.876 "nop_in_interval": 30, 00:05:15.876 "disable_chap": false, 00:05:15.876 "require_chap": false, 00:05:15.876 "mutual_chap": false, 00:05:15.876 "chap_group": 0, 00:05:15.876 "max_large_datain_per_connection": 64, 00:05:15.876 "max_r2t_per_connection": 4, 00:05:15.876 "pdu_pool_size": 36864, 00:05:15.876 "immediate_data_pool_size": 16384, 00:05:15.876 "data_out_pool_size": 2048 00:05:15.876 } 00:05:15.876 } 00:05:15.876 ] 00:05:15.876 } 00:05:15.876 ] 00:05:15.876 } 00:05:15.876 08:04:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:15.876 08:04:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1731719 00:05:15.876 08:04:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1731719 ']' 00:05:15.876 08:04:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1731719 00:05:15.876 08:04:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:15.876 08:04:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.876 08:04:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1731719 00:05:15.876 08:04:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.876 08:04:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.876 08:04:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1731719' 00:05:15.876 killing process with pid 1731719 00:05:15.876 08:04:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1731719 00:05:15.876 08:04:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1731719 00:05:16.137 08:04:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1731853 00:05:16.137 08:04:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:16.137 08:04:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:21.425 08:04:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1731853 00:05:21.425 08:04:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1731853 ']' 00:05:21.425 08:04:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1731853 00:05:21.425 08:04:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:21.425 08:04:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.425 08:04:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1731853 00:05:21.425 08:04:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.425 08:04:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.425 08:04:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1731853' 00:05:21.425 killing process with pid 1731853 00:05:21.425 08:04:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1731853 00:05:21.425 08:04:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1731853 00:05:21.425 08:04:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:21.425 08:04:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:21.425 00:05:21.425 real 0m6.552s 00:05:21.425 user 0m6.444s 00:05:21.425 sys 0m0.566s 00:05:21.425 08:04:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.425 08:04:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.425 ************************************ 00:05:21.425 END TEST skip_rpc_with_json 00:05:21.425 ************************************ 00:05:21.425 08:04:18 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:21.425 08:04:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.425 08:04:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.425 08:04:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.425 ************************************ 00:05:21.425 START TEST skip_rpc_with_delay 00:05:21.425 ************************************ 00:05:21.425 08:04:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:21.425 08:04:18 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:21.425 08:04:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:21.425 08:04:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:21.425 08:04:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.425 08:04:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.425 08:04:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.425 08:04:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.426 08:04:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.426 08:04:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.426 08:04:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.426 08:04:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:21.426 08:04:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:21.426 
[2024-11-28 08:04:18.642808] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:21.426 08:04:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:21.426 08:04:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:21.426 08:04:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:21.426 08:04:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:21.426 00:05:21.426 real 0m0.081s 00:05:21.426 user 0m0.050s 00:05:21.426 sys 0m0.031s 00:05:21.426 08:04:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.426 08:04:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:21.426 ************************************ 00:05:21.426 END TEST skip_rpc_with_delay 00:05:21.426 ************************************ 00:05:21.426 08:04:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:21.426 08:04:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:21.426 08:04:18 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:21.426 08:04:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.426 08:04:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.426 08:04:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.687 ************************************ 00:05:21.687 START TEST exit_on_failed_rpc_init 00:05:21.687 ************************************ 00:05:21.687 08:04:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:21.687 08:04:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1733118 00:05:21.687 08:04:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1733118 00:05:21.687 08:04:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.687 08:04:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1733118 ']' 00:05:21.687 08:04:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.687 08:04:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.687 08:04:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.687 08:04:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.687 08:04:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:21.687 [2024-11-28 08:04:18.804264] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
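skip_rpc_with_delay passes on a startup-time contradiction: --wait-for-rpc asks the app to pause until an RPC arrives, while --no-rpc-server removes the only channel that RPC could arrive on, so app.c rejects the combination before any subsystem starts (the *ERROR* line above, exit status 1). The whole test reduces to:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # expected: Cannot use '--wait-for-rpc' if no RPC server is going to be started.

exit_on_failed_rpc_init, whose first target is starting here, provokes a different startup failure: two instances contending for the same RPC socket.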
00:05:21.687 [2024-11-28 08:04:18.804312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1733118 ] 00:05:21.687 [2024-11-28 08:04:18.887486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.687 [2024-11-28 08:04:18.919411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:22.632 [2024-11-28 08:04:19.664554] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:05:22.632 [2024-11-28 08:04:19.664604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1733142 ] 00:05:22.632 [2024-11-28 08:04:19.752899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.632 [2024-11-28 08:04:19.788933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.632 [2024-11-28 08:04:19.788988] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
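Both instances here default to /var/tmp/spdk.sock, so the second listen fails and the harness then normalizes the raw exit status just below (es=234 -> 106 -> 1). Running two targets side by side for real requires distinct sockets via -r, the same flag the json_config suite uses later for /var/tmp/spdk_tgt.sock; the second socket name in this sketch is only illustrative:

  build/bin/spdk_tgt -m 0x1 &
  build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
  scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version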
00:05:22.632 [2024-11-28 08:04:19.788998] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:22.632 [2024-11-28 08:04:19.789005] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1733118 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1733118 ']' 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1733118 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1733118 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1733118' 00:05:22.632 killing process with pid 1733118 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1733118 00:05:22.632 08:04:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1733118 00:05:22.894 00:05:22.894 real 0m1.339s 00:05:22.894 user 0m1.590s 00:05:22.894 sys 0m0.373s 00:05:22.894 08:04:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.894 08:04:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:22.894 ************************************ 00:05:22.894 END TEST exit_on_failed_rpc_init 00:05:22.894 ************************************ 00:05:22.894 08:04:20 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:22.894 00:05:22.894 real 0m13.778s 00:05:22.894 user 0m13.336s 00:05:22.894 sys 0m1.603s 00:05:22.894 08:04:20 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.894 08:04:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.894 ************************************ 00:05:22.894 END TEST skip_rpc 00:05:22.894 ************************************ 00:05:22.894 08:04:20 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:22.894 08:04:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.894 08:04:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.894 08:04:20 -- 
common/autotest_common.sh@10 -- # set +x 00:05:23.156 ************************************ 00:05:23.156 START TEST rpc_client 00:05:23.156 ************************************ 00:05:23.156 08:04:20 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:23.156 * Looking for test storage... 00:05:23.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:23.156 08:04:20 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:23.156 08:04:20 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:23.156 08:04:20 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:23.156 08:04:20 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.156 08:04:20 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:23.156 08:04:20 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.156 08:04:20 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:23.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.156 --rc genhtml_branch_coverage=1 00:05:23.156 --rc genhtml_function_coverage=1 00:05:23.156 --rc genhtml_legend=1 00:05:23.156 --rc geninfo_all_blocks=1 00:05:23.156 --rc geninfo_unexecuted_blocks=1 00:05:23.156 00:05:23.156 ' 00:05:23.156 08:04:20 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:23.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.156 --rc genhtml_branch_coverage=1 00:05:23.156 --rc genhtml_function_coverage=1 00:05:23.156 --rc genhtml_legend=1 00:05:23.156 --rc geninfo_all_blocks=1 00:05:23.156 --rc geninfo_unexecuted_blocks=1 00:05:23.156 00:05:23.156 ' 00:05:23.156 08:04:20 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:23.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.156 --rc genhtml_branch_coverage=1 00:05:23.156 --rc genhtml_function_coverage=1 00:05:23.156 --rc genhtml_legend=1 00:05:23.156 --rc geninfo_all_blocks=1 00:05:23.156 --rc geninfo_unexecuted_blocks=1 00:05:23.156 00:05:23.156 ' 00:05:23.156 08:04:20 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:23.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.156 --rc genhtml_branch_coverage=1 00:05:23.156 --rc genhtml_function_coverage=1 00:05:23.156 --rc genhtml_legend=1 00:05:23.156 --rc geninfo_all_blocks=1 00:05:23.156 --rc geninfo_unexecuted_blocks=1 00:05:23.156 00:05:23.156 ' 00:05:23.156 08:04:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:23.156 OK 00:05:23.156 08:04:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:23.156 00:05:23.156 real 0m0.228s 00:05:23.156 user 0m0.122s 00:05:23.156 sys 0m0.121s 00:05:23.156 08:04:20 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.157 08:04:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:23.157 ************************************ 00:05:23.157 END TEST rpc_client 00:05:23.157 ************************************ 00:05:23.418 08:04:20 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
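json_config drives a full configuration round trip: start spdk_tgt with --wait-for-rpc so nothing initializes early, feed it an initial config, build an NVMe-oF target out of malloc bdevs over the RPC socket, save_config the result, tear it all down, and relaunch from the saved JSON to prove the state reproduces. Condensed from the tgt_rpc calls traced below (all against -s /var/tmp/spdk_tgt.sock):

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config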
00:05:23.418 08:04:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.418 08:04:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.418 08:04:20 -- common/autotest_common.sh@10 -- # set +x 00:05:23.418 ************************************ 00:05:23.418 START TEST json_config 00:05:23.418 ************************************ 00:05:23.418 08:04:20 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:23.418 08:04:20 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:23.418 08:04:20 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:23.418 08:04:20 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:23.418 08:04:20 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:23.418 08:04:20 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.418 08:04:20 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.418 08:04:20 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.418 08:04:20 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.418 08:04:20 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.418 08:04:20 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.418 08:04:20 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.418 08:04:20 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.418 08:04:20 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.418 08:04:20 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.418 08:04:20 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.418 08:04:20 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:23.418 08:04:20 json_config -- scripts/common.sh@345 -- # : 1 00:05:23.418 08:04:20 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.418 08:04:20 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.418 08:04:20 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:23.418 08:04:20 json_config -- scripts/common.sh@353 -- # local d=1 00:05:23.418 08:04:20 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.418 08:04:20 json_config -- scripts/common.sh@355 -- # echo 1 00:05:23.419 08:04:20 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.419 08:04:20 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:23.419 08:04:20 json_config -- scripts/common.sh@353 -- # local d=2 00:05:23.419 08:04:20 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.419 08:04:20 json_config -- scripts/common.sh@355 -- # echo 2 00:05:23.419 08:04:20 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.419 08:04:20 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.419 08:04:20 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.419 08:04:20 json_config -- scripts/common.sh@368 -- # return 0 00:05:23.419 08:04:20 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.419 08:04:20 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:23.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.419 --rc genhtml_branch_coverage=1 00:05:23.419 --rc genhtml_function_coverage=1 00:05:23.419 --rc genhtml_legend=1 00:05:23.419 --rc geninfo_all_blocks=1 00:05:23.419 --rc geninfo_unexecuted_blocks=1 00:05:23.419 00:05:23.419 ' 00:05:23.419 08:04:20 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:23.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.419 --rc genhtml_branch_coverage=1 00:05:23.419 --rc genhtml_function_coverage=1 00:05:23.419 --rc genhtml_legend=1 00:05:23.419 --rc geninfo_all_blocks=1 00:05:23.419 --rc geninfo_unexecuted_blocks=1 00:05:23.419 00:05:23.419 ' 00:05:23.419 08:04:20 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:23.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.419 --rc genhtml_branch_coverage=1 00:05:23.419 --rc genhtml_function_coverage=1 00:05:23.419 --rc genhtml_legend=1 00:05:23.419 --rc geninfo_all_blocks=1 00:05:23.419 --rc geninfo_unexecuted_blocks=1 00:05:23.419 00:05:23.419 ' 00:05:23.419 08:04:20 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:23.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.419 --rc genhtml_branch_coverage=1 00:05:23.419 --rc genhtml_function_coverage=1 00:05:23.419 --rc genhtml_legend=1 00:05:23.419 --rc geninfo_all_blocks=1 00:05:23.419 --rc geninfo_unexecuted_blocks=1 00:05:23.419 00:05:23.419 ' 00:05:23.419 08:04:20 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:23.419 08:04:20 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:23.419 08:04:20 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:23.419 08:04:20 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:23.419 08:04:20 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.419 08:04:20 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.419 08:04:20 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.419 08:04:20 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.419 08:04:20 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.419 08:04:20 json_config -- paths/export.sh@5 -- # export PATH 00:05:23.419 08:04:20 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@51 -- # : 0 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:05:23.419 08:04:20 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:23.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:23.419 08:04:20 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:23.681 08:04:20 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:23.681 08:04:20 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:23.681 08:04:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:23.681 08:04:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:23.681 08:04:20 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:23.681 08:04:20 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:23.681 08:04:20 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:23.681 08:04:20 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:23.681 08:04:20 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:23.681 08:04:20 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:23.681 08:04:20 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:23.681 08:04:20 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:23.681 08:04:20 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:23.681 08:04:20 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:23.681 08:04:20 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:23.681 08:04:20 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:23.681 INFO: JSON configuration test init 00:05:23.681 08:04:20 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:23.681 08:04:20 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:23.681 08:04:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.681 08:04:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.681 08:04:20 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:23.681 08:04:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.681 08:04:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.681 08:04:20 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:23.681 08:04:20 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:23.681 08:04:20 json_config -- json_config/common.sh@10 -- # shift 00:05:23.681 08:04:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:23.681 08:04:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:23.681 08:04:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:23.681 08:04:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.681 08:04:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.681 08:04:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1733596 00:05:23.681 08:04:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:23.681 Waiting for target to run... 00:05:23.681 08:04:20 json_config -- json_config/common.sh@25 -- # waitforlisten 1733596 /var/tmp/spdk_tgt.sock 00:05:23.681 08:04:20 json_config -- common/autotest_common.sh@835 -- # '[' -z 1733596 ']' 00:05:23.681 08:04:20 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:23.681 08:04:20 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.681 08:04:20 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:23.681 08:04:20 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:23.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:23.681 08:04:20 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.681 08:04:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.681 [2024-11-28 08:04:20.788329] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
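Unlike the earlier suites, this target takes a private RPC socket (-r /var/tmp/spdk_tgt.sock), a 1024 MB memory cap (-s 1024, visible as -m 1024 in the EAL line below), and --wait-for-rpc, which holds the app after the RPC server is listening but before subsystems initialize. waitforlisten then polls until the socket answers; a minimal version of that readiness loop (a sketch, not the harness's exact code) would be:

  until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done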
00:05:23.681 [2024-11-28 08:04:20.788406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1733596 ] 00:05:23.943 [2024-11-28 08:04:21.082426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.943 [2024-11-28 08:04:21.113171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.514 08:04:21 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.514 08:04:21 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:24.514 08:04:21 json_config -- json_config/common.sh@26 -- # echo '' 00:05:24.514 00:05:24.514 08:04:21 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:24.514 08:04:21 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:24.514 08:04:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.514 08:04:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.514 08:04:21 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:24.514 08:04:21 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:24.514 08:04:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:24.514 08:04:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.514 08:04:21 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:24.514 08:04:21 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:24.514 08:04:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:25.086 08:04:22 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:25.086 08:04:22 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:25.086 08:04:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:25.086 08:04:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.086 08:04:22 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:25.086 08:04:22 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:25.086 08:04:22 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:25.086 08:04:22 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:25.086 08:04:22 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:25.086 08:04:22 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:25.087 08:04:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:25.087 08:04:22 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:25.087 08:04:22 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:25.087 08:04:22 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:25.087 08:04:22 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:25.087 08:04:22 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:25.087 08:04:22 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:25.087 08:04:22 json_config -- json_config/json_config.sh@54 -- # sort 00:05:25.087 08:04:22 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:25.349 08:04:22 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:25.349 08:04:22 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:25.349 08:04:22 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:25.349 08:04:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:25.349 08:04:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.349 08:04:22 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:25.349 08:04:22 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:25.349 08:04:22 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:25.349 08:04:22 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:25.349 08:04:22 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:25.349 08:04:22 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:25.349 08:04:22 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:25.349 08:04:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:25.349 08:04:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.349 08:04:22 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:25.349 08:04:22 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:25.349 08:04:22 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:25.349 08:04:22 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:25.349 08:04:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:25.349 MallocForNvmf0 00:05:25.349 08:04:22 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:25.349 08:04:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:25.610 MallocForNvmf1 00:05:25.610 08:04:22 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:25.610 08:04:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:25.871 [2024-11-28 08:04:22.910245] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:25.871 08:04:22 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:25.871 08:04:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:25.871 08:04:23 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:25.871 08:04:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:26.132 08:04:23 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:26.132 08:04:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:26.394 08:04:23 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:26.394 08:04:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:26.394 [2024-11-28 08:04:23.580311] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:26.394 08:04:23 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:26.394 08:04:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.394 08:04:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.394 08:04:23 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:26.394 08:04:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.394 08:04:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.394 08:04:23 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:26.394 08:04:23 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:26.394 08:04:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:26.655 MallocBdevForConfigChangeCheck 00:05:26.655 08:04:23 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:26.655 08:04:23 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.655 08:04:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.655 08:04:23 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:26.655 08:04:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:26.916 08:04:24 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:26.916 INFO: shutting down applications... 
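The create_nvmf_subsystem_config step traced above reduces to a short RPC sequence; condensed here for reference, with values exactly as logged (socket path is the test's default /var/tmp/spdk_tgt.sock):

    rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB bdev, 512-byte blocks
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB bdev, 1024-byte blocks
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport (-u io-unit-size, -c in-capsule data size)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420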
00:05:26.916 08:04:24 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:26.916 08:04:24 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:26.916 08:04:24 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:26.916 08:04:24 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:27.488 Calling clear_iscsi_subsystem 00:05:27.488 Calling clear_nvmf_subsystem 00:05:27.488 Calling clear_nbd_subsystem 00:05:27.488 Calling clear_ublk_subsystem 00:05:27.488 Calling clear_vhost_blk_subsystem 00:05:27.488 Calling clear_vhost_scsi_subsystem 00:05:27.488 Calling clear_bdev_subsystem 00:05:27.488 08:04:24 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:27.488 08:04:24 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:27.488 08:04:24 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:27.488 08:04:24 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:27.488 08:04:24 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:27.488 08:04:24 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:27.749 08:04:24 json_config -- json_config/json_config.sh@352 -- # break 00:05:27.749 08:04:24 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:27.749 08:04:24 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:27.749 08:04:24 json_config -- json_config/common.sh@31 -- # local app=target 00:05:27.749 08:04:24 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:27.749 08:04:24 json_config -- json_config/common.sh@35 -- # [[ -n 1733596 ]] 00:05:27.749 08:04:24 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1733596 00:05:27.749 08:04:24 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:27.749 08:04:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.749 08:04:24 json_config -- json_config/common.sh@41 -- # kill -0 1733596 00:05:27.749 08:04:24 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:28.320 08:04:25 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:28.320 08:04:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.320 08:04:25 json_config -- json_config/common.sh@41 -- # kill -0 1733596 00:05:28.320 08:04:25 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:28.320 08:04:25 json_config -- json_config/common.sh@43 -- # break 00:05:28.320 08:04:25 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:28.320 08:04:25 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:28.320 SPDK target shutdown done 00:05:28.320 08:04:25 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:28.320 INFO: relaunching applications... 
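The json_config_test_shutdown_app sequence above is a plain signal-and-poll loop; a minimal sketch of the pattern (kill -0 only tests that the PID still exists, it sends no signal):

    kill -SIGINT "$pid"                        # ask spdk_tgt to exit cleanly
    for (( i = 0; i < 30; i++ )); do           # up to ~15 s (30 iterations x 0.5 s)
        kill -0 "$pid" 2>/dev/null || break    # process gone? stop waiting
        sleep 0.5
    done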
00:05:28.320 08:04:25 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:28.320 08:04:25 json_config -- json_config/common.sh@9 -- # local app=target 00:05:28.320 08:04:25 json_config -- json_config/common.sh@10 -- # shift 00:05:28.320 08:04:25 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:28.320 08:04:25 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:28.320 08:04:25 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:28.320 08:04:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.320 08:04:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.320 08:04:25 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1734737 00:05:28.320 08:04:25 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:28.320 Waiting for target to run... 00:05:28.320 08:04:25 json_config -- json_config/common.sh@25 -- # waitforlisten 1734737 /var/tmp/spdk_tgt.sock 00:05:28.320 08:04:25 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:28.320 08:04:25 json_config -- common/autotest_common.sh@835 -- # '[' -z 1734737 ']' 00:05:28.320 08:04:25 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:28.320 08:04:25 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.320 08:04:25 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:28.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:28.320 08:04:25 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.320 08:04:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.320 [2024-11-28 08:04:25.541278] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:05:28.320 [2024-11-28 08:04:25.541336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1734737 ] 00:05:28.582 [2024-11-28 08:04:25.841299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.582 [2024-11-28 08:04:25.866218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.155 [2024-11-28 08:04:26.363895] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:29.155 [2024-11-28 08:04:26.396282] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:29.155 08:04:26 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.155 08:04:26 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:29.155 08:04:26 json_config -- json_config/common.sh@26 -- # echo '' 00:05:29.155 00:05:29.155 08:04:26 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:29.155 08:04:26 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:29.155 INFO: Checking if target configuration is the same... 
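The "Checking if target configuration is the same..." step below feeds a fresh save_config dump and the JSON the target was relaunched with through json_diff.sh; a rough reconstruction from the trace that follows (both sides are normalized with config_filter.py -method sort before diffing, so only real differences count):

    tmp1=$(mktemp /tmp/62.XXX)
    tmp2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > "$tmp1"
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$tmp2"
    if diff -u "$tmp1" "$tmp2"; then
        echo 'INFO: JSON config files are the same'
    fi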
00:05:29.155 08:04:26 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:29.155 08:04:26 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:29.155 08:04:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:29.155 + '[' 2 -ne 2 ']' 00:05:29.416 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:29.416 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:29.416 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:29.416 +++ basename /dev/fd/62 00:05:29.416 ++ mktemp /tmp/62.XXX 00:05:29.416 + tmp_file_1=/tmp/62.QsN 00:05:29.416 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:29.416 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:29.416 + tmp_file_2=/tmp/spdk_tgt_config.json.R5O 00:05:29.416 + ret=0 00:05:29.416 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:29.676 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:29.676 + diff -u /tmp/62.QsN /tmp/spdk_tgt_config.json.R5O 00:05:29.676 + echo 'INFO: JSON config files are the same' 00:05:29.676 INFO: JSON config files are the same 00:05:29.676 + rm /tmp/62.QsN /tmp/spdk_tgt_config.json.R5O 00:05:29.676 + exit 0 00:05:29.676 08:04:26 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:29.676 08:04:26 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:29.676 INFO: changing configuration and checking if this can be detected... 00:05:29.676 08:04:26 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:29.676 08:04:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:29.936 08:04:27 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:29.936 08:04:27 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:29.936 08:04:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:29.936 + '[' 2 -ne 2 ']' 00:05:29.936 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:29.936 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:29.936 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:29.936 +++ basename /dev/fd/62 00:05:29.936 ++ mktemp /tmp/62.XXX 00:05:29.936 + tmp_file_1=/tmp/62.ecp 00:05:29.936 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:29.936 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:29.936 + tmp_file_2=/tmp/spdk_tgt_config.json.3Fb 00:05:29.936 + ret=0 00:05:29.936 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:30.197 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:30.197 + diff -u /tmp/62.ecp /tmp/spdk_tgt_config.json.3Fb 00:05:30.197 + ret=1 00:05:30.197 + echo '=== Start of file: /tmp/62.ecp ===' 00:05:30.197 + cat /tmp/62.ecp 00:05:30.197 + echo '=== End of file: /tmp/62.ecp ===' 00:05:30.197 + echo '' 00:05:30.197 + echo '=== Start of file: /tmp/spdk_tgt_config.json.3Fb ===' 00:05:30.197 + cat /tmp/spdk_tgt_config.json.3Fb 00:05:30.197 + echo '=== End of file: /tmp/spdk_tgt_config.json.3Fb ===' 00:05:30.197 + echo '' 00:05:30.197 + rm /tmp/62.ecp /tmp/spdk_tgt_config.json.3Fb 00:05:30.197 + exit 1 00:05:30.197 08:04:27 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:30.197 INFO: configuration change detected. 00:05:30.197 08:04:27 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:30.197 08:04:27 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:30.197 08:04:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.197 08:04:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.197 08:04:27 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:30.197 08:04:27 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:30.197 08:04:27 json_config -- json_config/json_config.sh@324 -- # [[ -n 1734737 ]] 00:05:30.197 08:04:27 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:30.197 08:04:27 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:30.198 08:04:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.198 08:04:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.198 08:04:27 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:30.198 08:04:27 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:30.198 08:04:27 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:30.198 08:04:27 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:30.198 08:04:27 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:30.198 08:04:27 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:30.198 08:04:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:30.198 08:04:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.198 08:04:27 json_config -- json_config/json_config.sh@330 -- # killprocess 1734737 00:05:30.198 08:04:27 json_config -- common/autotest_common.sh@954 -- # '[' -z 1734737 ']' 00:05:30.198 08:04:27 json_config -- common/autotest_common.sh@958 -- # kill -0 1734737 00:05:30.198 08:04:27 json_config -- common/autotest_common.sh@959 -- # uname 00:05:30.198 08:04:27 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.198 08:04:27 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1734737 00:05:30.458 08:04:27 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.458 08:04:27 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.458 08:04:27 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1734737' 00:05:30.458 killing process with pid 1734737 00:05:30.458 08:04:27 json_config -- common/autotest_common.sh@973 -- # kill 1734737 00:05:30.458 08:04:27 json_config -- common/autotest_common.sh@978 -- # wait 1734737 00:05:30.719 08:04:27 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:30.719 08:04:27 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:30.719 08:04:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:30.719 08:04:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.719 08:04:27 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:30.719 08:04:27 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:30.719 INFO: Success 00:05:30.719 00:05:30.719 real 0m7.331s 00:05:30.719 user 0m8.821s 00:05:30.719 sys 0m1.990s 00:05:30.719 08:04:27 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.719 08:04:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.719 ************************************ 00:05:30.719 END TEST json_config 00:05:30.719 ************************************ 00:05:30.719 08:04:27 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:30.719 08:04:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.719 08:04:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.719 08:04:27 -- common/autotest_common.sh@10 -- # set +x 00:05:30.719 ************************************ 00:05:30.719 START TEST json_config_extra_key 00:05:30.719 ************************************ 00:05:30.719 08:04:27 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:30.719 08:04:27 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:30.719 08:04:27 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.719 08:04:27 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:30.983 08:04:28 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.983 08:04:28 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:30.983 08:04:28 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.983 08:04:28 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:30.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.983 --rc genhtml_branch_coverage=1 00:05:30.983 --rc genhtml_function_coverage=1 00:05:30.983 --rc genhtml_legend=1 00:05:30.983 --rc geninfo_all_blocks=1 00:05:30.983 --rc geninfo_unexecuted_blocks=1 00:05:30.983 00:05:30.983 ' 00:05:30.983 08:04:28 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:30.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.983 --rc genhtml_branch_coverage=1 00:05:30.983 --rc genhtml_function_coverage=1 00:05:30.983 --rc genhtml_legend=1 00:05:30.983 --rc geninfo_all_blocks=1 00:05:30.983 --rc geninfo_unexecuted_blocks=1 00:05:30.983 00:05:30.983 ' 00:05:30.983 08:04:28 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:30.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.983 --rc genhtml_branch_coverage=1 00:05:30.983 --rc genhtml_function_coverage=1 00:05:30.983 --rc genhtml_legend=1 00:05:30.983 --rc geninfo_all_blocks=1 00:05:30.983 --rc geninfo_unexecuted_blocks=1 00:05:30.983 00:05:30.983 ' 00:05:30.983 08:04:28 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:30.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.983 --rc genhtml_branch_coverage=1 00:05:30.983 --rc genhtml_function_coverage=1 00:05:30.983 --rc genhtml_legend=1 00:05:30.983 --rc geninfo_all_blocks=1 00:05:30.983 --rc geninfo_unexecuted_blocks=1 00:05:30.983 00:05:30.983 ' 00:05:30.983 08:04:28 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:30.983 08:04:28 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:30.983 08:04:28 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.983 08:04:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.983 08:04:28 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.983 08:04:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:30.983 08:04:28 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:30.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:30.983 08:04:28 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:30.983 08:04:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:30.983 08:04:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:30.983 08:04:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:30.983 08:04:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:30.983 08:04:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:30.983 08:04:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:30.983 08:04:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:30.984 08:04:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:30.984 08:04:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:30.984 08:04:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:30.984 08:04:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:30.984 INFO: launching applications... 
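The non-fatal "[: : integer expression expected" printed above comes from an integer test against an empty variable at nvmf/common.sh line 33 ('[' '' -eq 1 ']'); the suite simply tolerates the failed test and falls through. A defensive variant (illustrative only, with a hypothetical variable name, not the upstream code):

    [ "${SOME_FLAG:-0}" -eq 1 ] && have_pci_nics=1   # default empty/unset to 0 before comparing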
00:05:30.984 08:04:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:30.984 08:04:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:30.984 08:04:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:30.984 08:04:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:30.984 08:04:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:30.984 08:04:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:30.984 08:04:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.984 08:04:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.984 08:04:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1735203 00:05:30.984 08:04:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:30.984 Waiting for target to run... 00:05:30.984 08:04:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1735203 /var/tmp/spdk_tgt.sock 00:05:30.984 08:04:28 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1735203 ']' 00:05:30.984 08:04:28 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:30.984 08:04:28 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:30.984 08:04:28 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.984 08:04:28 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:30.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:30.984 08:04:28 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.984 08:04:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:30.984 [2024-11-28 08:04:28.170328] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:05:30.984 [2024-11-28 08:04:28.170380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1735203 ] 00:05:31.245 [2024-11-28 08:04:28.478131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.245 [2024-11-28 08:04:28.507915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.816 08:04:28 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.816 08:04:28 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:31.816 08:04:28 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:31.816 00:05:31.816 08:04:28 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:31.816 INFO: shutting down applications... 
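"Waiting for target to run..." above is the suite's waitforlisten helper. One plausible reduction of what it does (an assumption, not the literal upstream code): poll the RPC socket until a harmless call succeeds, bailing out early if the process dies first.

    # sketch: wait_for_rpc <pid> [sock] -- hypothetical name, not the real helper
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock}
        local i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1                          # died while starting
            scripts/rpc.py -t 1 -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }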
00:05:31.816 08:04:28 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:31.816 08:04:28 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:31.816 08:04:28 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:31.816 08:04:28 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1735203 ]] 00:05:31.816 08:04:28 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1735203 00:05:31.816 08:04:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:31.816 08:04:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:31.816 08:04:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1735203 00:05:31.816 08:04:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:32.387 08:04:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:32.387 08:04:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.387 08:04:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1735203 00:05:32.387 08:04:29 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:32.387 08:04:29 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:32.387 08:04:29 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:32.387 08:04:29 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:32.387 SPDK target shutdown done 00:05:32.387 08:04:29 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:32.387 Success 00:05:32.387 00:05:32.387 real 0m1.556s 00:05:32.387 user 0m1.154s 00:05:32.387 sys 0m0.413s 00:05:32.387 08:04:29 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.387 08:04:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:32.387 ************************************ 00:05:32.387 END TEST json_config_extra_key 00:05:32.387 ************************************ 00:05:32.387 08:04:29 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:32.387 08:04:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.387 08:04:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.387 08:04:29 -- common/autotest_common.sh@10 -- # set +x 00:05:32.387 ************************************ 00:05:32.387 START TEST alias_rpc 00:05:32.387 ************************************ 00:05:32.387 08:04:29 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:32.387 * Looking for test storage... 
00:05:32.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:32.387 08:04:29 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:32.387 08:04:29 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:32.387 08:04:29 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:32.648 08:04:29 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:32.648 08:04:29 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.648 08:04:29 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.648 08:04:29 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.648 08:04:29 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.648 08:04:29 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.648 08:04:29 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.648 08:04:29 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.648 08:04:29 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.648 08:04:29 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.648 08:04:29 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.648 08:04:29 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.648 08:04:29 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:32.649 08:04:29 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:32.649 08:04:29 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.649 08:04:29 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:32.649 08:04:29 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:32.649 08:04:29 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:32.649 08:04:29 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.649 08:04:29 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:32.649 08:04:29 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.649 08:04:29 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:32.649 08:04:29 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:32.649 08:04:29 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.649 08:04:29 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:32.649 08:04:29 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.649 08:04:29 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.649 08:04:29 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.649 08:04:29 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:32.649 08:04:29 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.649 08:04:29 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:32.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.649 --rc genhtml_branch_coverage=1 00:05:32.649 --rc genhtml_function_coverage=1 00:05:32.649 --rc genhtml_legend=1 00:05:32.649 --rc geninfo_all_blocks=1 00:05:32.649 --rc geninfo_unexecuted_blocks=1 00:05:32.649 00:05:32.649 ' 00:05:32.649 08:04:29 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:32.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.649 --rc genhtml_branch_coverage=1 00:05:32.649 --rc genhtml_function_coverage=1 00:05:32.649 --rc genhtml_legend=1 00:05:32.649 --rc geninfo_all_blocks=1 00:05:32.649 --rc geninfo_unexecuted_blocks=1 00:05:32.649 00:05:32.649 ' 00:05:32.649 08:04:29 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:32.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.649 --rc genhtml_branch_coverage=1 00:05:32.649 --rc genhtml_function_coverage=1 00:05:32.649 --rc genhtml_legend=1 00:05:32.649 --rc geninfo_all_blocks=1 00:05:32.649 --rc geninfo_unexecuted_blocks=1 00:05:32.649 00:05:32.649 ' 00:05:32.649 08:04:29 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:32.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.649 --rc genhtml_branch_coverage=1 00:05:32.649 --rc genhtml_function_coverage=1 00:05:32.649 --rc genhtml_legend=1 00:05:32.649 --rc geninfo_all_blocks=1 00:05:32.649 --rc geninfo_unexecuted_blocks=1 00:05:32.649 00:05:32.649 ' 00:05:32.649 08:04:29 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:32.649 08:04:29 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1735601 00:05:32.649 08:04:29 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1735601 00:05:32.649 08:04:29 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:32.649 08:04:29 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1735601 ']' 00:05:32.649 08:04:29 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.649 08:04:29 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.649 08:04:29 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.649 08:04:29 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.649 08:04:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.649 [2024-11-28 08:04:29.817475] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
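The ERR trap set above ('killprocess $spdk_tgt_pid; exit 1') is what keeps a failed assertion from leaking a running target; the launch pattern distilled from this trace (killprocess and waitforlisten are the suite's own helpers):

    trap 'killprocess $spdk_tgt_pid; exit 1' ERR   # any failing command tears the target down
    build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"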
00:05:32.649 [2024-11-28 08:04:29.817553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1735601 ] 00:05:32.649 [2024-11-28 08:04:29.904270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.909 [2024-11-28 08:04:29.939357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.480 08:04:30 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.480 08:04:30 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:33.480 08:04:30 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:33.741 08:04:30 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1735601 00:05:33.741 08:04:30 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1735601 ']' 00:05:33.741 08:04:30 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1735601 00:05:33.741 08:04:30 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:33.741 08:04:30 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.741 08:04:30 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1735601 00:05:33.741 08:04:30 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.741 08:04:30 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.741 08:04:30 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1735601' 00:05:33.741 killing process with pid 1735601 00:05:33.741 08:04:30 alias_rpc -- common/autotest_common.sh@973 -- # kill 1735601 00:05:33.741 08:04:30 alias_rpc -- common/autotest_common.sh@978 -- # wait 1735601 00:05:34.002 00:05:34.002 real 0m1.504s 00:05:34.002 user 0m1.667s 00:05:34.002 sys 0m0.403s 00:05:34.002 08:04:31 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.002 08:04:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.002 ************************************ 00:05:34.002 END TEST alias_rpc 00:05:34.002 ************************************ 00:05:34.002 08:04:31 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:34.002 08:04:31 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:34.002 08:04:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.002 08:04:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.002 08:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:34.002 ************************************ 00:05:34.002 START TEST spdkcli_tcp 00:05:34.002 ************************************ 00:05:34.002 08:04:31 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:34.002 * Looking for test storage... 
00:05:34.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:34.002 08:04:31 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.002 08:04:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.002 08:04:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.264 08:04:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.264 08:04:31 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:34.264 08:04:31 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.264 08:04:31 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.264 --rc genhtml_branch_coverage=1 00:05:34.264 --rc genhtml_function_coverage=1 00:05:34.264 --rc genhtml_legend=1 00:05:34.264 --rc geninfo_all_blocks=1 00:05:34.264 --rc geninfo_unexecuted_blocks=1 00:05:34.264 00:05:34.264 ' 00:05:34.264 08:04:31 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.264 --rc genhtml_branch_coverage=1 00:05:34.264 --rc genhtml_function_coverage=1 00:05:34.264 --rc genhtml_legend=1 00:05:34.264 --rc geninfo_all_blocks=1 00:05:34.264 --rc 
geninfo_unexecuted_blocks=1 00:05:34.264 00:05:34.264 ' 00:05:34.264 08:04:31 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:34.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.265 --rc genhtml_branch_coverage=1 00:05:34.265 --rc genhtml_function_coverage=1 00:05:34.265 --rc genhtml_legend=1 00:05:34.265 --rc geninfo_all_blocks=1 00:05:34.265 --rc geninfo_unexecuted_blocks=1 00:05:34.265 00:05:34.265 ' 00:05:34.265 08:04:31 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.265 --rc genhtml_branch_coverage=1 00:05:34.265 --rc genhtml_function_coverage=1 00:05:34.265 --rc genhtml_legend=1 00:05:34.265 --rc geninfo_all_blocks=1 00:05:34.265 --rc geninfo_unexecuted_blocks=1 00:05:34.265 00:05:34.265 ' 00:05:34.265 08:04:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:34.265 08:04:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:34.265 08:04:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:34.265 08:04:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:34.265 08:04:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:34.265 08:04:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:34.265 08:04:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:34.265 08:04:31 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.265 08:04:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.265 08:04:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1736003 00:05:34.265 08:04:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1736003 00:05:34.265 08:04:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:34.265 08:04:31 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1736003 ']' 00:05:34.265 08:04:31 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.265 08:04:31 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.265 08:04:31 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.265 08:04:31 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.265 08:04:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.265 [2024-11-28 08:04:31.385814] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
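The spdkcli_tcp run traced just below exercises the RPC server over TCP by bridging the UNIX-domain socket with socat and pointing rpc.py at the TCP side; the core of the pattern as logged:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &            # bridge TCP 9998 -> RPC socket
    socat_pid=$!
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods    # -r retries, -t timeout (s)
    kill "$socat_pid"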
00:05:34.265 [2024-11-28 08:04:31.385888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1736003 ] 00:05:34.265 [2024-11-28 08:04:31.473161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.265 [2024-11-28 08:04:31.508982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.265 [2024-11-28 08:04:31.508982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.206 08:04:32 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.206 08:04:32 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:35.206 08:04:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1736293 00:05:35.206 08:04:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:35.206 08:04:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:35.206 [ 00:05:35.206 "bdev_malloc_delete", 00:05:35.206 "bdev_malloc_create", 00:05:35.206 "bdev_null_resize", 00:05:35.206 "bdev_null_delete", 00:05:35.206 "bdev_null_create", 00:05:35.206 "bdev_nvme_cuse_unregister", 00:05:35.206 "bdev_nvme_cuse_register", 00:05:35.206 "bdev_opal_new_user", 00:05:35.206 "bdev_opal_set_lock_state", 00:05:35.206 "bdev_opal_delete", 00:05:35.206 "bdev_opal_get_info", 00:05:35.206 "bdev_opal_create", 00:05:35.206 "bdev_nvme_opal_revert", 00:05:35.206 "bdev_nvme_opal_init", 00:05:35.206 "bdev_nvme_send_cmd", 00:05:35.206 "bdev_nvme_set_keys", 00:05:35.206 "bdev_nvme_get_path_iostat", 00:05:35.206 "bdev_nvme_get_mdns_discovery_info", 00:05:35.206 "bdev_nvme_stop_mdns_discovery", 00:05:35.206 "bdev_nvme_start_mdns_discovery", 00:05:35.206 "bdev_nvme_set_multipath_policy", 00:05:35.206 "bdev_nvme_set_preferred_path", 00:05:35.206 "bdev_nvme_get_io_paths", 00:05:35.206 "bdev_nvme_remove_error_injection", 00:05:35.206 "bdev_nvme_add_error_injection", 00:05:35.206 "bdev_nvme_get_discovery_info", 00:05:35.206 "bdev_nvme_stop_discovery", 00:05:35.206 "bdev_nvme_start_discovery", 00:05:35.206 "bdev_nvme_get_controller_health_info", 00:05:35.206 "bdev_nvme_disable_controller", 00:05:35.206 "bdev_nvme_enable_controller", 00:05:35.206 "bdev_nvme_reset_controller", 00:05:35.206 "bdev_nvme_get_transport_statistics", 00:05:35.206 "bdev_nvme_apply_firmware", 00:05:35.206 "bdev_nvme_detach_controller", 00:05:35.206 "bdev_nvme_get_controllers", 00:05:35.206 "bdev_nvme_attach_controller", 00:05:35.206 "bdev_nvme_set_hotplug", 00:05:35.206 "bdev_nvme_set_options", 00:05:35.206 "bdev_passthru_delete", 00:05:35.206 "bdev_passthru_create", 00:05:35.206 "bdev_lvol_set_parent_bdev", 00:05:35.206 "bdev_lvol_set_parent", 00:05:35.206 "bdev_lvol_check_shallow_copy", 00:05:35.206 "bdev_lvol_start_shallow_copy", 00:05:35.206 "bdev_lvol_grow_lvstore", 00:05:35.206 "bdev_lvol_get_lvols", 00:05:35.206 "bdev_lvol_get_lvstores", 00:05:35.206 "bdev_lvol_delete", 00:05:35.206 "bdev_lvol_set_read_only", 00:05:35.206 "bdev_lvol_resize", 00:05:35.206 "bdev_lvol_decouple_parent", 00:05:35.206 "bdev_lvol_inflate", 00:05:35.206 "bdev_lvol_rename", 00:05:35.206 "bdev_lvol_clone_bdev", 00:05:35.206 "bdev_lvol_clone", 00:05:35.206 "bdev_lvol_snapshot", 00:05:35.206 "bdev_lvol_create", 00:05:35.206 "bdev_lvol_delete_lvstore", 00:05:35.206 "bdev_lvol_rename_lvstore", 
00:05:35.206 "bdev_lvol_create_lvstore", 00:05:35.206 "bdev_raid_set_options", 00:05:35.206 "bdev_raid_remove_base_bdev", 00:05:35.206 "bdev_raid_add_base_bdev", 00:05:35.206 "bdev_raid_delete", 00:05:35.206 "bdev_raid_create", 00:05:35.206 "bdev_raid_get_bdevs", 00:05:35.206 "bdev_error_inject_error", 00:05:35.206 "bdev_error_delete", 00:05:35.206 "bdev_error_create", 00:05:35.207 "bdev_split_delete", 00:05:35.207 "bdev_split_create", 00:05:35.207 "bdev_delay_delete", 00:05:35.207 "bdev_delay_create", 00:05:35.207 "bdev_delay_update_latency", 00:05:35.207 "bdev_zone_block_delete", 00:05:35.207 "bdev_zone_block_create", 00:05:35.207 "blobfs_create", 00:05:35.207 "blobfs_detect", 00:05:35.207 "blobfs_set_cache_size", 00:05:35.207 "bdev_aio_delete", 00:05:35.207 "bdev_aio_rescan", 00:05:35.207 "bdev_aio_create", 00:05:35.207 "bdev_ftl_set_property", 00:05:35.207 "bdev_ftl_get_properties", 00:05:35.207 "bdev_ftl_get_stats", 00:05:35.207 "bdev_ftl_unmap", 00:05:35.207 "bdev_ftl_unload", 00:05:35.207 "bdev_ftl_delete", 00:05:35.207 "bdev_ftl_load", 00:05:35.207 "bdev_ftl_create", 00:05:35.207 "bdev_virtio_attach_controller", 00:05:35.207 "bdev_virtio_scsi_get_devices", 00:05:35.207 "bdev_virtio_detach_controller", 00:05:35.207 "bdev_virtio_blk_set_hotplug", 00:05:35.207 "bdev_iscsi_delete", 00:05:35.207 "bdev_iscsi_create", 00:05:35.207 "bdev_iscsi_set_options", 00:05:35.207 "accel_error_inject_error", 00:05:35.207 "ioat_scan_accel_module", 00:05:35.207 "dsa_scan_accel_module", 00:05:35.207 "iaa_scan_accel_module", 00:05:35.207 "vfu_virtio_create_fs_endpoint", 00:05:35.207 "vfu_virtio_create_scsi_endpoint", 00:05:35.207 "vfu_virtio_scsi_remove_target", 00:05:35.207 "vfu_virtio_scsi_add_target", 00:05:35.207 "vfu_virtio_create_blk_endpoint", 00:05:35.207 "vfu_virtio_delete_endpoint", 00:05:35.207 "keyring_file_remove_key", 00:05:35.207 "keyring_file_add_key", 00:05:35.207 "keyring_linux_set_options", 00:05:35.207 "fsdev_aio_delete", 00:05:35.207 "fsdev_aio_create", 00:05:35.207 "iscsi_get_histogram", 00:05:35.207 "iscsi_enable_histogram", 00:05:35.207 "iscsi_set_options", 00:05:35.207 "iscsi_get_auth_groups", 00:05:35.207 "iscsi_auth_group_remove_secret", 00:05:35.207 "iscsi_auth_group_add_secret", 00:05:35.207 "iscsi_delete_auth_group", 00:05:35.207 "iscsi_create_auth_group", 00:05:35.207 "iscsi_set_discovery_auth", 00:05:35.207 "iscsi_get_options", 00:05:35.207 "iscsi_target_node_request_logout", 00:05:35.207 "iscsi_target_node_set_redirect", 00:05:35.207 "iscsi_target_node_set_auth", 00:05:35.207 "iscsi_target_node_add_lun", 00:05:35.207 "iscsi_get_stats", 00:05:35.207 "iscsi_get_connections", 00:05:35.207 "iscsi_portal_group_set_auth", 00:05:35.207 "iscsi_start_portal_group", 00:05:35.207 "iscsi_delete_portal_group", 00:05:35.207 "iscsi_create_portal_group", 00:05:35.207 "iscsi_get_portal_groups", 00:05:35.207 "iscsi_delete_target_node", 00:05:35.207 "iscsi_target_node_remove_pg_ig_maps", 00:05:35.207 "iscsi_target_node_add_pg_ig_maps", 00:05:35.207 "iscsi_create_target_node", 00:05:35.207 "iscsi_get_target_nodes", 00:05:35.207 "iscsi_delete_initiator_group", 00:05:35.207 "iscsi_initiator_group_remove_initiators", 00:05:35.207 "iscsi_initiator_group_add_initiators", 00:05:35.207 "iscsi_create_initiator_group", 00:05:35.207 "iscsi_get_initiator_groups", 00:05:35.207 "nvmf_set_crdt", 00:05:35.207 "nvmf_set_config", 00:05:35.207 "nvmf_set_max_subsystems", 00:05:35.207 "nvmf_stop_mdns_prr", 00:05:35.207 "nvmf_publish_mdns_prr", 00:05:35.207 "nvmf_subsystem_get_listeners", 00:05:35.207 
"nvmf_subsystem_get_qpairs", 00:05:35.207 "nvmf_subsystem_get_controllers", 00:05:35.207 "nvmf_get_stats", 00:05:35.207 "nvmf_get_transports", 00:05:35.207 "nvmf_create_transport", 00:05:35.207 "nvmf_get_targets", 00:05:35.207 "nvmf_delete_target", 00:05:35.207 "nvmf_create_target", 00:05:35.207 "nvmf_subsystem_allow_any_host", 00:05:35.207 "nvmf_subsystem_set_keys", 00:05:35.207 "nvmf_subsystem_remove_host", 00:05:35.207 "nvmf_subsystem_add_host", 00:05:35.207 "nvmf_ns_remove_host", 00:05:35.207 "nvmf_ns_add_host", 00:05:35.207 "nvmf_subsystem_remove_ns", 00:05:35.207 "nvmf_subsystem_set_ns_ana_group", 00:05:35.207 "nvmf_subsystem_add_ns", 00:05:35.207 "nvmf_subsystem_listener_set_ana_state", 00:05:35.207 "nvmf_discovery_get_referrals", 00:05:35.207 "nvmf_discovery_remove_referral", 00:05:35.207 "nvmf_discovery_add_referral", 00:05:35.207 "nvmf_subsystem_remove_listener", 00:05:35.207 "nvmf_subsystem_add_listener", 00:05:35.207 "nvmf_delete_subsystem", 00:05:35.207 "nvmf_create_subsystem", 00:05:35.207 "nvmf_get_subsystems", 00:05:35.207 "env_dpdk_get_mem_stats", 00:05:35.207 "nbd_get_disks", 00:05:35.207 "nbd_stop_disk", 00:05:35.207 "nbd_start_disk", 00:05:35.207 "ublk_recover_disk", 00:05:35.207 "ublk_get_disks", 00:05:35.207 "ublk_stop_disk", 00:05:35.207 "ublk_start_disk", 00:05:35.207 "ublk_destroy_target", 00:05:35.207 "ublk_create_target", 00:05:35.207 "virtio_blk_create_transport", 00:05:35.207 "virtio_blk_get_transports", 00:05:35.207 "vhost_controller_set_coalescing", 00:05:35.207 "vhost_get_controllers", 00:05:35.207 "vhost_delete_controller", 00:05:35.207 "vhost_create_blk_controller", 00:05:35.207 "vhost_scsi_controller_remove_target", 00:05:35.207 "vhost_scsi_controller_add_target", 00:05:35.207 "vhost_start_scsi_controller", 00:05:35.207 "vhost_create_scsi_controller", 00:05:35.207 "thread_set_cpumask", 00:05:35.207 "scheduler_set_options", 00:05:35.207 "framework_get_governor", 00:05:35.207 "framework_get_scheduler", 00:05:35.207 "framework_set_scheduler", 00:05:35.207 "framework_get_reactors", 00:05:35.207 "thread_get_io_channels", 00:05:35.207 "thread_get_pollers", 00:05:35.207 "thread_get_stats", 00:05:35.207 "framework_monitor_context_switch", 00:05:35.207 "spdk_kill_instance", 00:05:35.207 "log_enable_timestamps", 00:05:35.207 "log_get_flags", 00:05:35.207 "log_clear_flag", 00:05:35.207 "log_set_flag", 00:05:35.207 "log_get_level", 00:05:35.207 "log_set_level", 00:05:35.207 "log_get_print_level", 00:05:35.207 "log_set_print_level", 00:05:35.207 "framework_enable_cpumask_locks", 00:05:35.207 "framework_disable_cpumask_locks", 00:05:35.207 "framework_wait_init", 00:05:35.207 "framework_start_init", 00:05:35.207 "scsi_get_devices", 00:05:35.207 "bdev_get_histogram", 00:05:35.207 "bdev_enable_histogram", 00:05:35.207 "bdev_set_qos_limit", 00:05:35.207 "bdev_set_qd_sampling_period", 00:05:35.207 "bdev_get_bdevs", 00:05:35.207 "bdev_reset_iostat", 00:05:35.207 "bdev_get_iostat", 00:05:35.207 "bdev_examine", 00:05:35.207 "bdev_wait_for_examine", 00:05:35.207 "bdev_set_options", 00:05:35.207 "accel_get_stats", 00:05:35.207 "accel_set_options", 00:05:35.207 "accel_set_driver", 00:05:35.207 "accel_crypto_key_destroy", 00:05:35.207 "accel_crypto_keys_get", 00:05:35.207 "accel_crypto_key_create", 00:05:35.207 "accel_assign_opc", 00:05:35.207 "accel_get_module_info", 00:05:35.207 "accel_get_opc_assignments", 00:05:35.207 "vmd_rescan", 00:05:35.207 "vmd_remove_device", 00:05:35.207 "vmd_enable", 00:05:35.207 "sock_get_default_impl", 00:05:35.207 "sock_set_default_impl", 
00:05:35.207 "sock_impl_set_options", 00:05:35.207 "sock_impl_get_options", 00:05:35.207 "iobuf_get_stats", 00:05:35.207 "iobuf_set_options", 00:05:35.207 "keyring_get_keys", 00:05:35.207 "vfu_tgt_set_base_path", 00:05:35.207 "framework_get_pci_devices", 00:05:35.207 "framework_get_config", 00:05:35.207 "framework_get_subsystems", 00:05:35.207 "fsdev_set_opts", 00:05:35.207 "fsdev_get_opts", 00:05:35.207 "trace_get_info", 00:05:35.207 "trace_get_tpoint_group_mask", 00:05:35.207 "trace_disable_tpoint_group", 00:05:35.207 "trace_enable_tpoint_group", 00:05:35.207 "trace_clear_tpoint_mask", 00:05:35.207 "trace_set_tpoint_mask", 00:05:35.207 "notify_get_notifications", 00:05:35.207 "notify_get_types", 00:05:35.207 "spdk_get_version", 00:05:35.207 "rpc_get_methods" 00:05:35.207 ] 00:05:35.207 08:04:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:35.207 08:04:32 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:35.207 08:04:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.207 08:04:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:35.207 08:04:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1736003 00:05:35.207 08:04:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1736003 ']' 00:05:35.207 08:04:32 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1736003 00:05:35.207 08:04:32 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:35.207 08:04:32 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.207 08:04:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1736003 00:05:35.207 08:04:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.207 08:04:32 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.207 08:04:32 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1736003' 00:05:35.207 killing process with pid 1736003 00:05:35.207 08:04:32 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1736003 00:05:35.207 08:04:32 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1736003 00:05:35.468 00:05:35.468 real 0m1.499s 00:05:35.468 user 0m2.726s 00:05:35.468 sys 0m0.451s 00:05:35.468 08:04:32 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.468 08:04:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.468 ************************************ 00:05:35.468 END TEST spdkcli_tcp 00:05:35.468 ************************************ 00:05:35.468 08:04:32 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:35.468 08:04:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.468 08:04:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.468 08:04:32 -- common/autotest_common.sh@10 -- # set +x 00:05:35.468 ************************************ 00:05:35.468 START TEST dpdk_mem_utility 00:05:35.468 ************************************ 00:05:35.468 08:04:32 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:35.730 * Looking for test storage... 
00:05:35.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:35.730 08:04:32 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:35.730 08:04:32 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:35.730 08:04:32 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:35.730 08:04:32 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.730 08:04:32 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:35.730 08:04:32 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.730 08:04:32 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:35.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.730 --rc genhtml_branch_coverage=1 00:05:35.730 --rc genhtml_function_coverage=1 00:05:35.730 --rc genhtml_legend=1 00:05:35.730 --rc geninfo_all_blocks=1 00:05:35.730 --rc geninfo_unexecuted_blocks=1 00:05:35.730 00:05:35.730 ' 00:05:35.730 08:04:32 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:35.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.730 --rc 
genhtml_branch_coverage=1 00:05:35.730 --rc genhtml_function_coverage=1 00:05:35.730 --rc genhtml_legend=1 00:05:35.730 --rc geninfo_all_blocks=1 00:05:35.730 --rc geninfo_unexecuted_blocks=1 00:05:35.730 00:05:35.730 ' 00:05:35.730 08:04:32 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:35.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.730 --rc genhtml_branch_coverage=1 00:05:35.730 --rc genhtml_function_coverage=1 00:05:35.730 --rc genhtml_legend=1 00:05:35.730 --rc geninfo_all_blocks=1 00:05:35.730 --rc geninfo_unexecuted_blocks=1 00:05:35.730 00:05:35.730 ' 00:05:35.730 08:04:32 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:35.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.730 --rc genhtml_branch_coverage=1 00:05:35.730 --rc genhtml_function_coverage=1 00:05:35.730 --rc genhtml_legend=1 00:05:35.730 --rc geninfo_all_blocks=1 00:05:35.730 --rc geninfo_unexecuted_blocks=1 00:05:35.730 00:05:35.730 ' 00:05:35.730 08:04:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:35.730 08:04:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1736409 00:05:35.730 08:04:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1736409 00:05:35.730 08:04:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.730 08:04:32 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1736409 ']' 00:05:35.730 08:04:32 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.730 08:04:32 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.730 08:04:32 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.731 08:04:32 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.731 08:04:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:35.731 [2024-11-28 08:04:32.966486] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:05:35.731 [2024-11-28 08:04:32.966560] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1736409 ] 00:05:35.992 [2024-11-28 08:04:33.057310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.992 [2024-11-28 08:04:33.096396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.562 08:04:33 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.562 08:04:33 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:36.562 08:04:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:36.562 08:04:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:36.562 08:04:33 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.562 08:04:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:36.562 { 00:05:36.562 "filename": "/tmp/spdk_mem_dump.txt" 00:05:36.562 } 00:05:36.562 08:04:33 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.562 08:04:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:36.562 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:36.562 1 heaps totaling size 818.000000 MiB 00:05:36.562 size: 818.000000 MiB heap id: 0 00:05:36.562 end heaps---------- 00:05:36.562 9 mempools totaling size 603.782043 MiB 00:05:36.562 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:36.562 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:36.562 size: 100.555481 MiB name: bdev_io_1736409 00:05:36.562 size: 50.003479 MiB name: msgpool_1736409 00:05:36.562 size: 36.509338 MiB name: fsdev_io_1736409 00:05:36.562 size: 21.763794 MiB name: PDU_Pool 00:05:36.562 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:36.562 size: 4.133484 MiB name: evtpool_1736409 00:05:36.562 size: 0.026123 MiB name: Session_Pool 00:05:36.562 end mempools------- 00:05:36.562 6 memzones totaling size 4.142822 MiB 00:05:36.562 size: 1.000366 MiB name: RG_ring_0_1736409 00:05:36.562 size: 1.000366 MiB name: RG_ring_1_1736409 00:05:36.562 size: 1.000366 MiB name: RG_ring_4_1736409 00:05:36.562 size: 1.000366 MiB name: RG_ring_5_1736409 00:05:36.562 size: 0.125366 MiB name: RG_ring_2_1736409 00:05:36.562 size: 0.015991 MiB name: RG_ring_3_1736409 00:05:36.562 end memzones------- 00:05:36.562 08:04:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:36.824 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:36.824 list of free elements. 
size: 10.852478 MiB 00:05:36.824 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:36.824 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:36.824 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:36.824 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:36.824 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:36.824 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:36.824 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:36.824 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:36.824 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:36.824 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:36.824 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:36.824 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:36.824 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:36.824 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:36.824 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:36.824 list of standard malloc elements. size: 199.218628 MiB 00:05:36.824 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:36.824 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:36.824 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:36.824 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:36.824 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:36.824 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:36.824 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:36.824 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:36.824 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:36.824 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:36.824 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:36.824 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:36.824 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:36.824 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:36.824 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:36.824 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:36.824 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:36.824 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:36.824 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:36.824 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:36.824 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:36.824 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:36.824 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:36.824 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:36.824 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:36.824 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:36.824 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:36.824 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:36.824 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:36.824 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:36.824 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:36.824 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:36.824 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:36.824 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:36.824 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:36.824 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:36.824 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:36.824 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:36.824 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:36.824 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:36.824 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:36.824 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:36.824 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:36.824 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:36.824 list of memzone associated elements. size: 607.928894 MiB 00:05:36.824 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:36.824 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:36.824 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:36.824 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:36.824 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:36.824 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1736409_0 00:05:36.824 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:36.824 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1736409_0 00:05:36.824 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:36.824 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1736409_0 00:05:36.824 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:36.824 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:36.824 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:36.824 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:36.824 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:36.824 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1736409_0 00:05:36.824 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:36.824 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1736409 00:05:36.824 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:36.824 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1736409 00:05:36.824 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:36.824 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:36.825 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:36.825 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:36.825 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:36.825 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:36.825 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:36.825 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:36.825 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:36.825 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1736409 00:05:36.825 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:36.825 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1736409 00:05:36.825 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:36.825 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1736409 00:05:36.825 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:05:36.825 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1736409 00:05:36.825 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:36.825 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1736409 00:05:36.825 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:36.825 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1736409 00:05:36.825 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:36.825 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:36.825 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:36.825 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:36.825 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:36.825 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:36.825 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:36.825 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1736409 00:05:36.825 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:36.825 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1736409 00:05:36.825 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:36.825 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:36.825 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:36.825 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:36.825 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:36.825 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1736409 00:05:36.825 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:36.825 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:36.825 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:36.825 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1736409 00:05:36.825 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:36.825 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1736409 00:05:36.825 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:36.825 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1736409 00:05:36.825 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:36.825 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:36.825 08:04:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:36.825 08:04:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1736409 00:05:36.825 08:04:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1736409 ']' 00:05:36.825 08:04:33 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1736409 00:05:36.825 08:04:33 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:36.825 08:04:33 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.825 08:04:33 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1736409 00:05:36.825 08:04:33 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.825 08:04:33 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.825 08:04:33 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1736409' 00:05:36.825 killing process with pid 1736409 00:05:36.825 08:04:33 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1736409 00:05:36.825 08:04:33 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1736409 00:05:37.087 00:05:37.087 real 0m1.424s 00:05:37.087 user 0m1.512s 00:05:37.087 sys 0m0.428s 00:05:37.087 08:04:34 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.087 08:04:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:37.087 ************************************ 00:05:37.087 END TEST dpdk_mem_utility 00:05:37.087 ************************************ 00:05:37.087 08:04:34 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:37.087 08:04:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.087 08:04:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.087 08:04:34 -- common/autotest_common.sh@10 -- # set +x 00:05:37.087 ************************************ 00:05:37.087 START TEST event 00:05:37.087 ************************************ 00:05:37.087 08:04:34 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:37.087 * Looking for test storage... 00:05:37.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:37.087 08:04:34 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:37.087 08:04:34 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:37.087 08:04:34 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:37.348 08:04:34 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:37.348 08:04:34 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.348 08:04:34 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.348 08:04:34 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.348 08:04:34 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.348 08:04:34 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.348 08:04:34 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.348 08:04:34 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.348 08:04:34 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.348 08:04:34 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.348 08:04:34 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.348 08:04:34 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.348 08:04:34 event -- scripts/common.sh@344 -- # case "$op" in 00:05:37.348 08:04:34 event -- scripts/common.sh@345 -- # : 1 00:05:37.348 08:04:34 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.348 08:04:34 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.348 08:04:34 event -- scripts/common.sh@365 -- # decimal 1 00:05:37.348 08:04:34 event -- scripts/common.sh@353 -- # local d=1 00:05:37.348 08:04:34 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.348 08:04:34 event -- scripts/common.sh@355 -- # echo 1 00:05:37.348 08:04:34 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.348 08:04:34 event -- scripts/common.sh@366 -- # decimal 2 00:05:37.348 08:04:34 event -- scripts/common.sh@353 -- # local d=2 00:05:37.348 08:04:34 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.348 08:04:34 event -- scripts/common.sh@355 -- # echo 2 00:05:37.348 08:04:34 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.348 08:04:34 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.348 08:04:34 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.348 08:04:34 event -- scripts/common.sh@368 -- # return 0 00:05:37.348 08:04:34 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.348 08:04:34 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:37.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.348 --rc genhtml_branch_coverage=1 00:05:37.348 --rc genhtml_function_coverage=1 00:05:37.348 --rc genhtml_legend=1 00:05:37.348 --rc geninfo_all_blocks=1 00:05:37.348 --rc geninfo_unexecuted_blocks=1 00:05:37.348 00:05:37.348 ' 00:05:37.348 08:04:34 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:37.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.348 --rc genhtml_branch_coverage=1 00:05:37.348 --rc genhtml_function_coverage=1 00:05:37.348 --rc genhtml_legend=1 00:05:37.348 --rc geninfo_all_blocks=1 00:05:37.348 --rc geninfo_unexecuted_blocks=1 00:05:37.348 00:05:37.348 ' 00:05:37.348 08:04:34 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:37.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.348 --rc genhtml_branch_coverage=1 00:05:37.348 --rc genhtml_function_coverage=1 00:05:37.348 --rc genhtml_legend=1 00:05:37.348 --rc geninfo_all_blocks=1 00:05:37.348 --rc geninfo_unexecuted_blocks=1 00:05:37.348 00:05:37.349 ' 00:05:37.349 08:04:34 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:37.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.349 --rc genhtml_branch_coverage=1 00:05:37.349 --rc genhtml_function_coverage=1 00:05:37.349 --rc genhtml_legend=1 00:05:37.349 --rc geninfo_all_blocks=1 00:05:37.349 --rc geninfo_unexecuted_blocks=1 00:05:37.349 00:05:37.349 ' 00:05:37.349 08:04:34 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:37.349 08:04:34 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:37.349 08:04:34 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:37.349 08:04:34 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:37.349 08:04:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.349 08:04:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.349 ************************************ 00:05:37.349 START TEST event_perf 00:05:37.349 ************************************ 00:05:37.349 08:04:34 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:37.349 Running I/O for 1 seconds...[2024-11-28 08:04:34.459280] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:05:37.349 [2024-11-28 08:04:34.459393] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1736814 ] 00:05:37.349 [2024-11-28 08:04:34.551145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:37.349 [2024-11-28 08:04:34.594179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.349 [2024-11-28 08:04:34.594314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.349 [2024-11-28 08:04:34.594558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.349 Running I/O for 1 seconds...[2024-11-28 08:04:34.594559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:38.734 00:05:38.734 lcore 0: 177499 00:05:38.734 lcore 1: 177501 00:05:38.734 lcore 2: 177498 00:05:38.734 lcore 3: 177498 00:05:38.734 done. 00:05:38.734 00:05:38.734 real 0m1.184s 00:05:38.734 user 0m4.098s 00:05:38.734 sys 0m0.082s 00:05:38.734 08:04:35 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.734 08:04:35 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:38.734 ************************************ 00:05:38.734 END TEST event_perf 00:05:38.734 ************************************ 00:05:38.734 08:04:35 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:38.734 08:04:35 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:38.734 08:04:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.734 08:04:35 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.734 ************************************ 00:05:38.734 START TEST event_reactor 00:05:38.734 ************************************ 00:05:38.734 08:04:35 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:38.734 [2024-11-28 08:04:35.724976] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:05:38.734 [2024-11-28 08:04:35.725071] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737169 ] 00:05:38.734 [2024-11-28 08:04:35.814525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.734 [2024-11-28 08:04:35.848405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.675 test_start 00:05:39.675 oneshot 00:05:39.675 tick 100 00:05:39.675 tick 100 00:05:39.675 tick 250 00:05:39.675 tick 100 00:05:39.675 tick 100 00:05:39.675 tick 100 00:05:39.675 tick 250 00:05:39.675 tick 500 00:05:39.675 tick 100 00:05:39.675 tick 100 00:05:39.675 tick 250 00:05:39.675 tick 100 00:05:39.675 tick 100 00:05:39.675 test_end 00:05:39.675 00:05:39.675 real 0m1.171s 00:05:39.675 user 0m1.078s 00:05:39.675 sys 0m0.089s 00:05:39.675 08:04:36 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.675 08:04:36 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:39.675 ************************************ 00:05:39.675 END TEST event_reactor 00:05:39.675 ************************************ 00:05:39.675 08:04:36 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:39.675 08:04:36 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:39.675 08:04:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.675 08:04:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.675 ************************************ 00:05:39.675 START TEST event_reactor_perf 00:05:39.675 ************************************ 00:05:39.675 08:04:36 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:39.936 [2024-11-28 08:04:36.976600] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:05:39.936 [2024-11-28 08:04:36.976708] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737362 ] 00:05:39.936 [2024-11-28 08:04:37.065966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.936 [2024-11-28 08:04:37.104891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.878 test_start 00:05:40.878 test_end 00:05:40.878 Performance: 530926 events per second 00:05:40.878 00:05:40.878 real 0m1.176s 00:05:40.878 user 0m1.093s 00:05:40.878 sys 0m0.080s 00:05:40.878 08:04:38 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.878 08:04:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:40.878 ************************************ 00:05:40.878 END TEST event_reactor_perf 00:05:40.878 ************************************ 00:05:41.140 08:04:38 event -- event/event.sh@49 -- # uname -s 00:05:41.140 08:04:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:41.140 08:04:38 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:41.140 08:04:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.140 08:04:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.140 08:04:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.140 ************************************ 00:05:41.140 START TEST event_scheduler 00:05:41.140 ************************************ 00:05:41.140 08:04:38 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:41.140 * Looking for test storage... 
00:05:41.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:41.140 08:04:38 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.140 08:04:38 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.140 08:04:38 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.140 08:04:38 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.140 08:04:38 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:41.140 08:04:38 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.140 08:04:38 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.140 --rc genhtml_branch_coverage=1 00:05:41.140 --rc genhtml_function_coverage=1 00:05:41.140 --rc genhtml_legend=1 00:05:41.140 --rc geninfo_all_blocks=1 00:05:41.140 --rc geninfo_unexecuted_blocks=1 00:05:41.140 00:05:41.140 ' 00:05:41.140 08:04:38 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.140 --rc genhtml_branch_coverage=1 00:05:41.140 --rc genhtml_function_coverage=1 00:05:41.140 --rc genhtml_legend=1 00:05:41.140 --rc geninfo_all_blocks=1 00:05:41.140 --rc geninfo_unexecuted_blocks=1 00:05:41.140 00:05:41.140 ' 00:05:41.140 08:04:38 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.140 --rc genhtml_branch_coverage=1 00:05:41.140 --rc genhtml_function_coverage=1 00:05:41.140 --rc genhtml_legend=1 00:05:41.140 --rc geninfo_all_blocks=1 00:05:41.140 --rc geninfo_unexecuted_blocks=1 00:05:41.140 00:05:41.140 ' 00:05:41.140 08:04:38 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.140 --rc genhtml_branch_coverage=1 00:05:41.140 --rc genhtml_function_coverage=1 00:05:41.140 --rc genhtml_legend=1 00:05:41.140 --rc geninfo_all_blocks=1 00:05:41.140 --rc geninfo_unexecuted_blocks=1 00:05:41.140 00:05:41.140 ' 00:05:41.140 08:04:38 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:41.140 08:04:38 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1737618 00:05:41.140 08:04:38 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.140 08:04:38 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1737618 00:05:41.140 08:04:38 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:05:41.140 08:04:38 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1737618 ']' 00:05:41.140 08:04:38 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.140 08:04:38 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.140 08:04:38 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.140 08:04:38 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.140 08:04:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:41.401 [2024-11-28 08:04:38.470971] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:05:41.401 [2024-11-28 08:04:38.471049] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1737618 ] 00:05:41.401 [2024-11-28 08:04:38.564615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:41.401 [2024-11-28 08:04:38.621040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.401 [2024-11-28 08:04:38.621216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.401 [2024-11-28 08:04:38.621315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.401 [2024-11-28 08:04:38.621314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:42.362 08:04:39 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.362 08:04:39 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:42.362 08:04:39 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:42.362 08:04:39 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.362 08:04:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.362 [2024-11-28 08:04:39.295724] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:42.362 [2024-11-28 08:04:39.295743] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:42.362 [2024-11-28 08:04:39.295753] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:42.362 [2024-11-28 08:04:39.295759] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:42.362 [2024-11-28 08:04:39.295764] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:42.362 08:04:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.362 08:04:39 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:42.362 08:04:39 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.362 08:04:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.362 [2024-11-28 08:04:39.363537] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:42.362 08:04:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.362 08:04:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:42.362 08:04:39 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.362 08:04:39 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.362 08:04:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.362 ************************************ 00:05:42.362 START TEST scheduler_create_thread 00:05:42.362 ************************************ 00:05:42.362 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:42.362 08:04:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:42.362 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.362 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.362 2 00:05:42.362 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.362 08:04:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:42.362 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.362 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.362 3 00:05:42.362 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.362 08:04:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:42.362 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.362 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.362 4 00:05:42.362 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.362 08:04:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:42.362 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.362 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.362 5 00:05:42.362 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.362 08:04:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:42.362 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.362 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.362 6 00:05:42.362 08:04:39 event.event_scheduler.scheduler_create_thread -- 
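(A condensed sketch of the startup sequence above, assuming the harness's rpc_cmd wrapper simply forwards to scripts/rpc.py on the default /var/tmp/spdk.sock; both method names appear in the rpc_get_methods listing earlier in this log, and the launch flags are copied from the trace. The dpdk_governor error is non-fatal: with a 0xF core mask covering only part of an SMT sibling set, the dynamic scheduler falls back to its load/core/busy defaults as logged.)

    # start the test app with subsystem init deferred until framework_start_init
    test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    # select the dynamic scheduler; governor init may fail and fall back as above
    scripts/rpc.py framework_set_scheduler dynamic
    # complete deferred init; the app then logs "Scheduler test application started."
    scripts/rpc.py framework_start_init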
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.362 08:04:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:42.363 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.363 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.363 7 00:05:42.363 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.363 08:04:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:42.363 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.363 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.363 8 00:05:42.363 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.363 08:04:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:42.363 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.363 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.363 9 00:05:42.363 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.363 08:04:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:42.363 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.363 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.994 10 00:05:42.994 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.994 08:04:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:42.994 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.994 08:04:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.376 08:04:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.376 08:04:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:44.376 08:04:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:44.376 08:04:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.376 08:04:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.947 08:04:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.947 08:04:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:44.947 08:04:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.947 08:04:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.889 08:04:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.889 08:04:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:45.889 08:04:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:45.889 08:04:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.889 08:04:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.460 08:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.460 00:05:46.460 real 0m4.226s 00:05:46.460 user 0m0.026s 00:05:46.460 sys 0m0.006s 00:05:46.460 08:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.460 08:04:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:46.460 ************************************ 00:05:46.460 END TEST scheduler_create_thread 00:05:46.460 ************************************ 00:05:46.460 08:04:43 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:46.460 08:04:43 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1737618 00:05:46.460 08:04:43 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1737618 ']' 00:05:46.460 08:04:43 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1737618 00:05:46.460 08:04:43 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:46.460 08:04:43 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.460 08:04:43 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1737618 00:05:46.460 08:04:43 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:46.460 08:04:43 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:46.460 08:04:43 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1737618' 00:05:46.460 killing process with pid 1737618 00:05:46.460 08:04:43 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1737618 00:05:46.460 08:04:43 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1737618 00:05:46.721 [2024-11-28 08:04:43.905085] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
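(The scheduler_create_thread subtest that just finished manages SPDK threads entirely through plugin RPCs; a hedged sketch of the lifecycle it exercises, assuming scripts/rpc.py can import the test's scheduler_plugin module. Thread ids 11 and 12 and all flag values are taken from the trace above; -m is a cpumask and -a a target active percentage, per the test's usage.)

    # create a thread pinned by cpumask 0x1 at 100% target load
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # re-weight an existing thread (id 11 in the trace) to 50% active
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    # create and then delete a throwaway thread (id 12 in the trace)
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12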
00:05:46.981 00:05:46.981 real 0m5.848s 00:05:46.981 user 0m12.918s 00:05:46.981 sys 0m0.424s 00:05:46.981 08:04:44 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.981 08:04:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.981 ************************************ 00:05:46.981 END TEST event_scheduler 00:05:46.981 ************************************ 00:05:46.981 08:04:44 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:46.981 08:04:44 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:46.981 08:04:44 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.981 08:04:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.981 08:04:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.981 ************************************ 00:05:46.981 START TEST app_repeat 00:05:46.981 ************************************ 00:05:46.981 08:04:44 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:46.981 08:04:44 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.981 08:04:44 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.982 08:04:44 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:46.982 08:04:44 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.982 08:04:44 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:46.982 08:04:44 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:46.982 08:04:44 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:46.982 08:04:44 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1738993 00:05:46.982 08:04:44 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.982 08:04:44 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:46.982 08:04:44 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1738993' 00:05:46.982 Process app_repeat pid: 1738993 00:05:46.982 08:04:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:46.982 08:04:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:46.982 spdk_app_start Round 0 00:05:46.982 08:04:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1738993 /var/tmp/spdk-nbd.sock 00:05:46.982 08:04:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1738993 ']' 00:05:46.982 08:04:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:46.982 08:04:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.982 08:04:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:46.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:46.982 08:04:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.982 08:04:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.982 [2024-11-28 08:04:44.182624] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
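app_repeat_test, whose trace begins here, launches the app_repeat binary once on cores 0-1 and then loops three rounds against its RPC socket, re-waiting for the socket after each self-restart. The shape of the harness, as a sketch (killprocess and waitforlisten are helpers from the test's common scripts and are not reproduced here; backgrounding with & is implied by the captured repeat_pid rather than shown verbatim in the trace):

    rpc_server=/var/tmp/spdk-nbd.sock
    modprobe nbd
    test/event/app_repeat/app_repeat -r "$rpc_server" -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$rpc_server"
        # per round: create Malloc0/Malloc1, verify them over /dev/nbd0 and
        # /dev/nbd1 (see the later sketch), then ask the app to restart
        # itself for the next round
        scripts/rpc.py -s "$rpc_server" spdk_kill_instance SIGTERM
        sleep 3
    done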
00:05:46.982 [2024-11-28 08:04:44.182694] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738993 ] 00:05:46.982 [2024-11-28 08:04:44.266462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.242 [2024-11-28 08:04:44.299116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.242 [2024-11-28 08:04:44.299117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.242 08:04:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.242 08:04:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:47.242 08:04:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.501 Malloc0 00:05:47.501 08:04:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.501 Malloc1 00:05:47.501 08:04:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:47.501 08:04:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.501 08:04:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.501 08:04:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:47.501 08:04:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.501 08:04:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:47.501 08:04:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:47.501 08:04:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.501 08:04:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.501 08:04:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:47.501 08:04:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.501 08:04:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:47.501 08:04:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:47.501 08:04:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:47.501 08:04:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.501 08:04:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:47.761 /dev/nbd0 00:05:47.761 08:04:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:47.761 08:04:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:47.761 08:04:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:47.761 08:04:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:47.761 08:04:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:47.761 08:04:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:47.761 08:04:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:47.761 08:04:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:47.761 08:04:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:47.761 08:04:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:47.761 08:04:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.761 1+0 records in 00:05:47.761 1+0 records out 00:05:47.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275882 s, 14.8 MB/s 00:05:47.761 08:04:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:47.761 08:04:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:47.761 08:04:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:47.761 08:04:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:47.761 08:04:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:47.761 08:04:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.761 08:04:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.761 08:04:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:48.021 /dev/nbd1 00:05:48.021 08:04:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:48.021 08:04:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:48.021 08:04:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:48.021 08:04:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:48.021 08:04:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:48.021 08:04:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:48.021 08:04:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:48.021 08:04:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:48.021 08:04:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:48.021 08:04:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:48.021 08:04:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.021 1+0 records in 00:05:48.021 1+0 records out 00:05:48.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277895 s, 14.7 MB/s 00:05:48.021 08:04:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.021 08:04:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:48.021 08:04:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:48.021 08:04:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:48.021 08:04:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:48.021 08:04:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.021 08:04:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.021 
08:04:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.021 08:04:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.021 08:04:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.281 08:04:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:48.281 { 00:05:48.281 "nbd_device": "/dev/nbd0", 00:05:48.281 "bdev_name": "Malloc0" 00:05:48.281 }, 00:05:48.281 { 00:05:48.281 "nbd_device": "/dev/nbd1", 00:05:48.281 "bdev_name": "Malloc1" 00:05:48.281 } 00:05:48.281 ]' 00:05:48.281 08:04:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:48.281 { 00:05:48.281 "nbd_device": "/dev/nbd0", 00:05:48.281 "bdev_name": "Malloc0" 00:05:48.281 }, 00:05:48.281 { 00:05:48.281 "nbd_device": "/dev/nbd1", 00:05:48.281 "bdev_name": "Malloc1" 00:05:48.281 } 00:05:48.281 ]' 00:05:48.281 08:04:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.281 08:04:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:48.281 /dev/nbd1' 00:05:48.281 08:04:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:48.281 /dev/nbd1' 00:05:48.281 08:04:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.281 08:04:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:48.281 08:04:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:48.281 08:04:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:48.281 08:04:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:48.281 08:04:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:48.281 08:04:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.281 08:04:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.281 08:04:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:48.281 08:04:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.281 08:04:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:48.281 08:04:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:48.281 256+0 records in 00:05:48.281 256+0 records out 00:05:48.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012783 s, 82.0 MB/s 00:05:48.281 08:04:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.281 08:04:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:48.281 256+0 records in 00:05:48.281 256+0 records out 00:05:48.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114935 s, 91.2 MB/s 00:05:48.281 08:04:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.281 08:04:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:48.281 256+0 records in 00:05:48.281 256+0 records out 00:05:48.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138595 s, 75.7 MB/s 00:05:48.281 08:04:45 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:48.281 08:04:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.282 08:04:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.282 08:04:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:48.282 08:04:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.282 08:04:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:48.282 08:04:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:48.282 08:04:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.282 08:04:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:48.282 08:04:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.282 08:04:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:48.282 08:04:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:48.282 08:04:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:48.282 08:04:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.282 08:04:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.282 08:04:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:48.282 08:04:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:48.282 08:04:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.282 08:04:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:48.542 08:04:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:48.542 08:04:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:48.542 08:04:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:48.542 08:04:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.542 08:04:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.542 08:04:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:48.542 08:04:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:48.542 08:04:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.542 08:04:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.542 08:04:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:48.855 08:04:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:48.855 08:04:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:48.855 08:04:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:48.855 08:04:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.855 08:04:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:48.855 08:04:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:48.855 08:04:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:48.855 08:04:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.855 08:04:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.855 08:04:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.855 08:04:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.116 08:04:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:49.116 08:04:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:49.116 08:04:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.116 08:04:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:49.116 08:04:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:49.116 08:04:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.116 08:04:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:49.116 08:04:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:49.116 08:04:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:49.116 08:04:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:49.116 08:04:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:49.116 08:04:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:49.116 08:04:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:49.116 08:04:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:49.386 [2024-11-28 08:04:46.411769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.386 [2024-11-28 08:04:46.440994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.386 [2024-11-28 08:04:46.440995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.386 [2024-11-28 08:04:46.469993] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:49.386 [2024-11-28 08:04:46.470024] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:52.689 08:04:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:52.690 08:04:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:52.690 spdk_app_start Round 1 00:05:52.690 08:04:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1738993 /var/tmp/spdk-nbd.sock 00:05:52.690 08:04:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1738993 ']' 00:05:52.690 08:04:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.690 08:04:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.690 08:04:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:52.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
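Each round now repeats the NBD verification that Round 0 just completed above: two 64 MiB Malloc bdevs are exported as NBD devices, 1 MiB of random data is written through each device with O_DIRECT and compared on readback, and the exports are stopped again. One pass, condensed into a sketch (rpc.py path shortened; the sizes are the 64/4096 arguments from the bdev_malloc_create calls in the trace):

    sock=/var/tmp/spdk-nbd.sock
    scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096        # -> Malloc0
    scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096        # -> Malloc1
    scripts/rpc.py -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    scripts/rpc.py -s "$sock" nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256         # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest "$dev"                         # byte-for-byte readback check
    done
    rm nbdrandtest
    scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0
    scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd1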
00:05:52.690 08:04:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.690 08:04:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:52.690 08:04:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.690 08:04:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:52.690 08:04:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.690 Malloc0 00:05:52.690 08:04:49 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:52.690 Malloc1 00:05:52.690 08:04:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.690 08:04:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.690 08:04:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.690 08:04:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:52.690 08:04:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.690 08:04:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:52.690 08:04:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:52.690 08:04:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.690 08:04:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.690 08:04:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:52.690 08:04:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.690 08:04:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:52.690 08:04:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:52.690 08:04:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:52.690 08:04:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.690 08:04:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:52.952 /dev/nbd0 00:05:52.952 08:04:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:52.952 08:04:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:52.952 08:04:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:52.952 08:04:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:52.952 08:04:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:52.952 08:04:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:52.952 08:04:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:52.952 08:04:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:52.952 08:04:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:52.952 08:04:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:52.952 08:04:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:52.952 1+0 records in 00:05:52.952 1+0 records out 00:05:52.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275482 s, 14.9 MB/s 00:05:52.952 08:04:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.952 08:04:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:52.952 08:04:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:52.952 08:04:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:52.952 08:04:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:52.952 08:04:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.952 08:04:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.952 08:04:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:53.214 /dev/nbd1 00:05:53.214 08:04:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:53.214 08:04:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:53.214 08:04:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:53.214 08:04:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:53.214 08:04:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:53.214 08:04:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:53.214 08:04:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:53.214 08:04:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:53.214 08:04:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:53.214 08:04:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:53.214 08:04:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.214 1+0 records in 00:05:53.214 1+0 records out 00:05:53.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295885 s, 13.8 MB/s 00:05:53.214 08:04:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.214 08:04:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:53.214 08:04:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.214 08:04:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:53.214 08:04:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:53.214 08:04:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.214 08:04:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.214 08:04:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.214 08:04:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.214 08:04:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.475 08:04:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:53.476 { 00:05:53.476 "nbd_device": "/dev/nbd0", 00:05:53.476 "bdev_name": "Malloc0" 00:05:53.476 }, 00:05:53.476 { 00:05:53.476 "nbd_device": "/dev/nbd1", 00:05:53.476 "bdev_name": "Malloc1" 00:05:53.476 } 00:05:53.476 ]' 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:53.476 { 00:05:53.476 "nbd_device": "/dev/nbd0", 00:05:53.476 "bdev_name": "Malloc0" 00:05:53.476 }, 00:05:53.476 { 00:05:53.476 "nbd_device": "/dev/nbd1", 00:05:53.476 "bdev_name": "Malloc1" 00:05:53.476 } 00:05:53.476 ]' 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:53.476 /dev/nbd1' 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:53.476 /dev/nbd1' 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:53.476 256+0 records in 00:05:53.476 256+0 records out 00:05:53.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117218 s, 89.5 MB/s 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:53.476 256+0 records in 00:05:53.476 256+0 records out 00:05:53.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122943 s, 85.3 MB/s 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:53.476 256+0 records in 00:05:53.476 256+0 records out 00:05:53.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132351 s, 79.2 MB/s 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.476 08:04:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:53.736 08:04:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:53.736 08:04:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:53.736 08:04:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:53.736 08:04:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.736 08:04:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.736 08:04:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:53.736 08:04:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.736 08:04:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.736 08:04:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.736 08:04:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:53.997 08:04:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:53.997 08:04:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:53.997 08:04:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:53.997 08:04:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.997 08:04:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.997 08:04:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:53.997 08:04:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.997 08:04:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.997 08:04:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.997 08:04:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.997 08:04:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.997 08:04:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:53.997 08:04:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:53.997 08:04:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.997 08:04:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:54.259 08:04:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:54.259 08:04:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.259 08:04:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:54.259 08:04:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:54.259 08:04:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:54.259 08:04:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:54.259 08:04:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:54.259 08:04:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:54.259 08:04:51 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:54.259 08:04:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:54.521 [2024-11-28 08:04:51.559667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.521 [2024-11-28 08:04:51.587550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.521 [2024-11-28 08:04:51.587551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.521 [2024-11-28 08:04:51.617156] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:54.521 [2024-11-28 08:04:51.617191] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:57.827 08:04:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:57.827 08:04:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:57.827 spdk_app_start Round 2 00:05:57.827 08:04:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1738993 /var/tmp/spdk-nbd.sock 00:05:57.827 08:04:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1738993 ']' 00:05:57.827 08:04:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.827 08:04:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.827 08:04:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:57.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
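Every nbd_start_disk above is followed by the waitfornbd helper: the device is only used once it appears in /proc/partitions and a single O_DIRECT read of it succeeds. Reconstructed from the xtrace as a sketch (the retry limit of 20 is the trace's; the sleeps between retries are an assumption, since the real helper's pacing is not visible here):

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off, not visible in the trace
        done
        for ((i = 1; i <= 20; i++)); do
            # one direct read proves the kernel can do real I/O to the device
            if dd if=/dev/"$nbd_name" of=nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s nbdtest)
                rm -f nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1   # assumed back-off
        done
        return 1
    }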
00:05:57.827 08:04:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.827 08:04:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.827 08:04:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.827 08:04:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:57.827 08:04:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.827 Malloc0 00:05:57.827 08:04:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.827 Malloc1 00:05:57.827 08:04:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.827 08:04:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.827 08:04:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.827 08:04:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:57.827 08:04:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.827 08:04:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:57.827 08:04:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.827 08:04:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.827 08:04:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.827 08:04:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:57.827 08:04:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.827 08:04:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:57.827 08:04:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:57.827 08:04:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:57.827 08:04:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.827 08:04:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:58.086 /dev/nbd0 00:05:58.086 08:04:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:58.086 08:04:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:58.086 08:04:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:58.086 08:04:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:58.086 08:04:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:58.086 08:04:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:58.086 08:04:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:58.086 08:04:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:58.086 08:04:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:58.086 08:04:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:58.086 08:04:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:58.086 1+0 records in 00:05:58.086 1+0 records out 00:05:58.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287399 s, 14.3 MB/s 00:05:58.086 08:04:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.086 08:04:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:58.086 08:04:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.086 08:04:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:58.086 08:04:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:58.086 08:04:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.086 08:04:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.086 08:04:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:58.348 /dev/nbd1 00:05:58.348 08:04:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:58.348 08:04:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:58.348 08:04:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:58.348 08:04:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:58.348 08:04:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:58.348 08:04:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:58.348 08:04:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:58.348 08:04:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:58.348 08:04:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:58.348 08:04:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:58.348 08:04:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.348 1+0 records in 00:05:58.348 1+0 records out 00:05:58.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270068 s, 15.2 MB/s 00:05:58.348 08:04:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.348 08:04:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:58.348 08:04:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.348 08:04:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:58.348 08:04:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:58.348 08:04:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.348 08:04:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.348 08:04:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.348 08:04:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.348 08:04:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:58.609 { 00:05:58.609 "nbd_device": "/dev/nbd0", 00:05:58.609 "bdev_name": "Malloc0" 00:05:58.609 }, 00:05:58.609 { 00:05:58.609 "nbd_device": "/dev/nbd1", 00:05:58.609 "bdev_name": "Malloc1" 00:05:58.609 } 00:05:58.609 ]' 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:58.609 { 00:05:58.609 "nbd_device": "/dev/nbd0", 00:05:58.609 "bdev_name": "Malloc0" 00:05:58.609 }, 00:05:58.609 { 00:05:58.609 "nbd_device": "/dev/nbd1", 00:05:58.609 "bdev_name": "Malloc1" 00:05:58.609 } 00:05:58.609 ]' 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:58.609 /dev/nbd1' 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:58.609 /dev/nbd1' 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:58.609 256+0 records in 00:05:58.609 256+0 records out 00:05:58.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00623736 s, 168 MB/s 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:58.609 256+0 records in 00:05:58.609 256+0 records out 00:05:58.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013799 s, 76.0 MB/s 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:58.609 256+0 records in 00:05:58.609 256+0 records out 00:05:58.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134537 s, 77.9 MB/s 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:58.609 08:04:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:58.610 08:04:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:58.610 08:04:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.610 08:04:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.610 08:04:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.610 08:04:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:58.610 08:04:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.610 08:04:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:58.871 08:04:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:58.871 08:04:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:58.871 08:04:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:58.871 08:04:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.871 08:04:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.871 08:04:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:58.871 08:04:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.871 08:04:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.871 08:04:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.871 08:04:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:59.132 08:04:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:59.132 08:04:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:59.132 08:04:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:59.132 08:04:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.132 08:04:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.132 08:04:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:59.132 08:04:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.132 08:04:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.132 08:04:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.132 08:04:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.132 08:04:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.132 08:04:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:59.132 08:04:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:59.132 08:04:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.392 08:04:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:59.392 08:04:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:59.392 08:04:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.392 08:04:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:59.392 08:04:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:59.392 08:04:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:59.392 08:04:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:59.392 08:04:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:59.392 08:04:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:59.392 08:04:56 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:59.392 08:04:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:59.653 [2024-11-28 08:04:56.710754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.653 [2024-11-28 08:04:56.740140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.653 [2024-11-28 08:04:56.740141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.653 [2024-11-28 08:04:56.769173] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.653 [2024-11-28 08:04:56.769204] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:02.961 08:04:59 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1738993 /var/tmp/spdk-nbd.sock 00:06:02.961 08:04:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1738993 ']' 00:06:02.961 08:04:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.961 08:04:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.961 08:04:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:02.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
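Before each restart the harness also proves tear-down actually happened: nbd_get_disks must return an empty list, so the device count parsed out of its JSON has to drop back to 0. The counting logic from the trace, as a sketch (the '|| true' mirrors the trace's handling of grep -c, which exits non-zero when it counts 0 matches):

    nbd_get_count() {
        local rpc_server=$1 disks_json names count
        disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        count=$(echo "$names" | grep -c /dev/nbd || true)
        echo "$count"
    }
    count=$(nbd_get_count /var/tmp/spdk-nbd.sock)
    if [ "$count" -ne 0 ]; then
        echo "NBD devices still attached after nbd_stop_disk" >&2
        exit 1
    fi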
00:06:02.961 08:04:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.961 08:04:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.961 08:04:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.961 08:04:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:02.961 08:04:59 event.app_repeat -- event/event.sh@39 -- # killprocess 1738993 00:06:02.961 08:04:59 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1738993 ']' 00:06:02.961 08:04:59 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1738993 00:06:02.961 08:04:59 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:02.961 08:04:59 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.961 08:04:59 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1738993 00:06:02.961 08:04:59 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.961 08:04:59 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.961 08:04:59 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1738993' 00:06:02.961 killing process with pid 1738993 00:06:02.961 08:04:59 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1738993 00:06:02.961 08:04:59 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1738993 00:06:02.961 spdk_app_start is called in Round 0. 00:06:02.961 Shutdown signal received, stop current app iteration 00:06:02.961 Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 reinitialization... 00:06:02.961 spdk_app_start is called in Round 1. 00:06:02.961 Shutdown signal received, stop current app iteration 00:06:02.961 Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 reinitialization... 00:06:02.961 spdk_app_start is called in Round 2. 00:06:02.961 Shutdown signal received, stop current app iteration 00:06:02.961 Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 reinitialization... 00:06:02.961 spdk_app_start is called in Round 3. 
00:06:02.961 Shutdown signal received, stop current app iteration 00:06:02.961 08:04:59 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:02.961 08:04:59 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:02.961 00:06:02.961 real 0m15.826s 00:06:02.961 user 0m34.776s 00:06:02.961 sys 0m2.285s 00:06:02.961 08:04:59 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.961 08:04:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.961 ************************************ 00:06:02.961 END TEST app_repeat 00:06:02.961 ************************************ 00:06:02.961 08:05:00 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:02.961 08:05:00 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:02.961 08:05:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.961 08:05:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.961 08:05:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.961 ************************************ 00:06:02.961 START TEST cpu_locks 00:06:02.961 ************************************ 00:06:02.961 08:05:00 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:02.961 * Looking for test storage... 00:06:02.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:02.961 08:05:00 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:02.961 08:05:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:02.961 08:05:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:02.961 08:05:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:02.961 08:05:00 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.961 08:05:00 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.961 08:05:00 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.961 08:05:00 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.961 08:05:00 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.961 08:05:00 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.961 08:05:00 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.961 08:05:00 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.961 08:05:00 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.961 08:05:00 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.961 08:05:00 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.961 08:05:00 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:02.961 08:05:00 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:02.961 08:05:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.961 08:05:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.961 08:05:00 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:02.961 08:05:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:02.961 08:05:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.961 08:05:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:02.961 08:05:00 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.961 08:05:00 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:03.223 08:05:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:03.223 08:05:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.223 08:05:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:03.223 08:05:00 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.223 08:05:00 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.223 08:05:00 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.223 08:05:00 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:03.223 08:05:00 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.223 08:05:00 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:03.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.223 --rc genhtml_branch_coverage=1 00:06:03.223 --rc genhtml_function_coverage=1 00:06:03.223 --rc genhtml_legend=1 00:06:03.223 --rc geninfo_all_blocks=1 00:06:03.223 --rc geninfo_unexecuted_blocks=1 00:06:03.223 00:06:03.223 ' 00:06:03.223 08:05:00 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:03.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.223 --rc genhtml_branch_coverage=1 00:06:03.223 --rc genhtml_function_coverage=1 00:06:03.223 --rc genhtml_legend=1 00:06:03.223 --rc geninfo_all_blocks=1 00:06:03.223 --rc geninfo_unexecuted_blocks=1 00:06:03.223 00:06:03.223 ' 00:06:03.223 08:05:00 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:03.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.223 --rc genhtml_branch_coverage=1 00:06:03.223 --rc genhtml_function_coverage=1 00:06:03.223 --rc genhtml_legend=1 00:06:03.223 --rc geninfo_all_blocks=1 00:06:03.223 --rc geninfo_unexecuted_blocks=1 00:06:03.223 00:06:03.223 ' 00:06:03.223 08:05:00 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:03.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.223 --rc genhtml_branch_coverage=1 00:06:03.223 --rc genhtml_function_coverage=1 00:06:03.223 --rc genhtml_legend=1 00:06:03.223 --rc geninfo_all_blocks=1 00:06:03.223 --rc geninfo_unexecuted_blocks=1 00:06:03.223 00:06:03.223 ' 00:06:03.223 08:05:00 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:03.223 08:05:00 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:03.223 08:05:00 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:03.223 08:05:00 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:03.223 08:05:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.223 08:05:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.223 08:05:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.223 ************************************ 
00:06:03.223 START TEST default_locks 00:06:03.223 ************************************ 00:06:03.223 08:05:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:03.223 08:05:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1742275 00:06:03.223 08:05:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1742275 00:06:03.223 08:05:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.223 08:05:00 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1742275 ']' 00:06:03.223 08:05:00 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.223 08:05:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.223 08:05:00 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.223 08:05:00 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.223 08:05:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.223 [2024-11-28 08:05:00.362281] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:06:03.223 [2024-11-28 08:05:00.362343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1742275 ] 00:06:03.223 [2024-11-28 08:05:00.450889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.223 [2024-11-28 08:05:00.486939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.163 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.163 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:04.163 08:05:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1742275 00:06:04.163 08:05:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1742275 00:06:04.163 08:05:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.423 lslocks: write error 00:06:04.423 08:05:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1742275 00:06:04.423 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1742275 ']' 00:06:04.423 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1742275 00:06:04.423 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:04.423 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.423 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1742275 00:06:04.423 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.423 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.423 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 1742275' 00:06:04.423 killing process with pid 1742275 00:06:04.423 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1742275 00:06:04.423 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1742275 00:06:04.683 08:05:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1742275 00:06:04.683 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:04.683 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1742275 00:06:04.683 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:04.683 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.683 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:04.683 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.683 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1742275 00:06:04.683 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1742275 ']' 00:06:04.683 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.683 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.683 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
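The "lslocks: write error" above is benign: the locks_exist helper (cpu_locks.sh@22 in the trace) pipes lslocks into grep -q, and grep -q exits on its first match, closing the pipe while lslocks is still writing. A minimal reconstruction of that helper, with the name and lock-file prefix as they appear in the trace:

    # Succeeds when the given PID holds an SPDK CPU-core lock file.
    # grep -q exits on first match; lslocks may then report "write error"
    # on the broken pipe, which the test ignores.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

The NOT waitforlisten call that follows kills the target first and then asserts that waiting on the dead PID fails, so the "No such process" and "ERROR: process (pid: 1742275) is no longer running" lines just below are the expected outcome of that negative test, not a suite failure.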
00:06:04.683 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.684 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1742275) - No such process 00:06:04.684 ERROR: process (pid: 1742275) is no longer running 00:06:04.684 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.684 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:04.684 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:04.684 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:04.684 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:04.684 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:04.684 08:05:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:04.684 08:05:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:04.684 08:05:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:04.684 08:05:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:04.684 00:06:04.684 real 0m1.575s 00:06:04.684 user 0m1.688s 00:06:04.684 sys 0m0.564s 00:06:04.684 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.684 08:05:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.684 ************************************ 00:06:04.684 END TEST default_locks 00:06:04.684 ************************************ 00:06:04.684 08:05:01 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:04.684 08:05:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.684 08:05:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.684 08:05:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.684 ************************************ 00:06:04.684 START TEST default_locks_via_rpc 00:06:04.684 ************************************ 00:06:04.684 08:05:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:04.684 08:05:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1742633 00:06:04.684 08:05:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1742633 00:06:04.684 08:05:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.684 08:05:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1742633 ']' 00:06:04.684 08:05:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.684 08:05:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.684 08:05:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:04.684 08:05:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.684 08:05:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.944 [2024-11-28 08:05:02.002837] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:06:04.944 [2024-11-28 08:05:02.002889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1742633 ] 00:06:04.944 [2024-11-28 08:05:02.089093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.944 [2024-11-28 08:05:02.119747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.512 08:05:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.512 08:05:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:05.512 08:05:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:05.512 08:05:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.512 08:05:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.512 08:05:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.512 08:05:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:05.512 08:05:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:05.512 08:05:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:05.512 08:05:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:05.512 08:05:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:05.512 08:05:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.512 08:05:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.512 08:05:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.771 08:05:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1742633 00:06:05.771 08:05:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1742633 00:06:05.771 08:05:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.031 08:05:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1742633 00:06:06.031 08:05:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1742633 ']' 00:06:06.031 08:05:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1742633 00:06:06.031 08:05:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:06.031 08:05:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.031 08:05:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1742633 00:06:06.031 08:05:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.031 
08:05:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.031 08:05:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1742633' 00:06:06.031 killing process with pid 1742633 00:06:06.031 08:05:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1742633 00:06:06.031 08:05:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1742633 00:06:06.292 00:06:06.292 real 0m1.546s 00:06:06.292 user 0m1.671s 00:06:06.292 sys 0m0.537s 00:06:06.292 08:05:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.292 08:05:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.292 ************************************ 00:06:06.292 END TEST default_locks_via_rpc 00:06:06.292 ************************************ 00:06:06.292 08:05:03 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:06.292 08:05:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.292 08:05:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.292 08:05:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.292 ************************************ 00:06:06.292 START TEST non_locking_app_on_locked_coremask 00:06:06.292 ************************************ 00:06:06.292 08:05:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:06.292 08:05:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1743001 00:06:06.292 08:05:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1743001 /var/tmp/spdk.sock 00:06:06.292 08:05:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.292 08:05:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1743001 ']' 00:06:06.292 08:05:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.292 08:05:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.292 08:05:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.292 08:05:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.292 08:05:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.552 [2024-11-28 08:05:03.622301] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
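default_locks_via_rpc, which finished above, exercises the same core 0 lock but toggles it through framework RPCs instead of process lifetime: the trace shows rpc_cmd framework_disable_cpumask_locks releasing the lock and framework_enable_cpumask_locks re-claiming it before lslocks is consulted again. rpc_cmd in the trace is a thin wrapper over scripts/rpc.py; a direct equivalent of the sequence (socket and PID as shown above):

    rpc_py="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

    $rpc_py framework_disable_cpumask_locks    # drop the /var/tmp/spdk_cpu_lock_* claim
    $rpc_py framework_enable_cpumask_locks     # re-claim it for this instance
    lslocks -p 1742633 | grep -q spdk_cpu_lock && echo "core lock held again"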
00:06:06.552 [2024-11-28 08:05:03.622351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1743001 ] 00:06:06.552 [2024-11-28 08:05:03.706107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.552 [2024-11-28 08:05:03.735235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.493 08:05:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.493 08:05:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:07.493 08:05:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1743334 00:06:07.493 08:05:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1743334 /var/tmp/spdk2.sock 00:06:07.493 08:05:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1743334 ']' 00:06:07.493 08:05:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:07.493 08:05:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.493 08:05:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.493 08:05:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.493 08:05:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.493 08:05:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.493 [2024-11-28 08:05:04.476656] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:06:07.493 [2024-11-28 08:05:04.476712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1743334 ] 00:06:07.493 [2024-11-28 08:05:04.566037] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
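The two launches above are the whole point of non_locking_app_on_locked_coremask: the first spdk_tgt claims core 0 normally, while the second is started with --disable-cpumask-locks (hence the "CPU core locks deactivated" notice), so it can come up on the same core without contending for the lock file. Schematically, with the binary path and flags as they appear in the trace:

    spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

    $spdk_tgt -m 0x1 &                                                  # claims core 0
    $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # shares core 0, no claim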
00:06:07.493 [2024-11-28 08:05:04.566065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.493 [2024-11-28 08:05:04.628183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.064 08:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.064 08:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:08.064 08:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1743001 00:06:08.064 08:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1743001 00:06:08.064 08:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.324 lslocks: write error 00:06:08.324 08:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1743001 00:06:08.324 08:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1743001 ']' 00:06:08.324 08:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1743001 00:06:08.324 08:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:08.324 08:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.324 08:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1743001 00:06:08.324 08:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.324 08:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.324 08:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1743001' 00:06:08.324 killing process with pid 1743001 00:06:08.324 08:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1743001 00:06:08.324 08:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1743001 00:06:08.896 08:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1743334 00:06:08.896 08:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1743334 ']' 00:06:08.896 08:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1743334 00:06:08.896 08:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:08.896 08:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.896 08:05:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1743334 00:06:08.896 08:05:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.896 08:05:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.896 08:05:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1743334' 00:06:08.896 
killing process with pid 1743334 00:06:08.896 08:05:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1743334 00:06:08.896 08:05:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1743334 00:06:09.182 00:06:09.182 real 0m2.638s 00:06:09.182 user 0m2.971s 00:06:09.182 sys 0m0.765s 00:06:09.182 08:05:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.182 08:05:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.182 ************************************ 00:06:09.182 END TEST non_locking_app_on_locked_coremask 00:06:09.182 ************************************ 00:06:09.182 08:05:06 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:09.182 08:05:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.183 08:05:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.183 08:05:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.183 ************************************ 00:06:09.183 START TEST locking_app_on_unlocked_coremask 00:06:09.183 ************************************ 00:06:09.183 08:05:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:09.183 08:05:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1743699 00:06:09.183 08:05:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1743699 /var/tmp/spdk.sock 00:06:09.183 08:05:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:09.183 08:05:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1743699 ']' 00:06:09.183 08:05:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.183 08:05:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.183 08:05:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.183 08:05:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.183 08:05:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.183 [2024-11-28 08:05:06.326813] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:06:09.183 [2024-11-28 08:05:06.326859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1743699 ] 00:06:09.183 [2024-11-28 08:05:06.411000] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
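locking_app_on_unlocked_coremask, which starts above, inverts that arrangement: the first instance runs with --disable-cpumask-locks and never claims core 0, so the second, normally locking instance can take the lock on a core that is already busy. Using the same $spdk_tgt path as in the sketch above:

    $spdk_tgt -m 0x1 --disable-cpumask-locks &    # unlocked: leaves core 0 unclaimed
    $spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # locking: acquires the core 0 lock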
00:06:09.183 [2024-11-28 08:05:06.411029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.183 [2024-11-28 08:05:06.440944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.128 08:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.128 08:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:10.128 08:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:10.128 08:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1743734 00:06:10.128 08:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1743734 /var/tmp/spdk2.sock 00:06:10.128 08:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1743734 ']' 00:06:10.128 08:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.128 08:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.128 08:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.128 08:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.128 08:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.128 [2024-11-28 08:05:07.185068] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:06:10.128 [2024-11-28 08:05:07.185120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1743734 ] 00:06:10.128 [2024-11-28 08:05:07.270705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.128 [2024-11-28 08:05:07.333299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.700 08:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.700 08:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:10.700 08:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1743734 00:06:10.700 08:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1743734 00:06:10.700 08:05:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.272 lslocks: write error 00:06:11.272 08:05:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1743699 00:06:11.272 08:05:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1743699 ']' 00:06:11.272 08:05:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1743699 00:06:11.272 08:05:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:11.272 08:05:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.272 08:05:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1743699 00:06:11.272 08:05:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.272 08:05:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.272 08:05:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1743699' 00:06:11.272 killing process with pid 1743699 00:06:11.272 08:05:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1743699 00:06:11.272 08:05:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1743699 00:06:11.842 08:05:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1743734 00:06:11.842 08:05:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1743734 ']' 00:06:11.842 08:05:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1743734 00:06:11.842 08:05:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:11.842 08:05:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.842 08:05:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1743734 00:06:11.842 08:05:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.842 08:05:08 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.842 08:05:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1743734' 00:06:11.842 killing process with pid 1743734 00:06:11.842 08:05:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1743734 00:06:11.842 08:05:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1743734 00:06:11.842 00:06:11.842 real 0m2.827s 00:06:11.842 user 0m3.166s 00:06:11.842 sys 0m0.863s 00:06:11.842 08:05:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.842 08:05:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.842 ************************************ 00:06:11.842 END TEST locking_app_on_unlocked_coremask 00:06:11.842 ************************************ 00:06:12.103 08:05:09 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:12.103 08:05:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.103 08:05:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.103 08:05:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.103 ************************************ 00:06:12.103 START TEST locking_app_on_locked_coremask 00:06:12.103 ************************************ 00:06:12.103 08:05:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:12.103 08:05:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1744263 00:06:12.103 08:05:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1744263 /var/tmp/spdk.sock 00:06:12.103 08:05:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.103 08:05:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1744263 ']' 00:06:12.103 08:05:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.103 08:05:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.103 08:05:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.103 08:05:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.103 08:05:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.103 [2024-11-28 08:05:09.230110] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
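locking_app_on_locked_coremask, starting above, is the strict case: the first instance claims core 0 with locks enabled, a second locking instance is launched on the same mask, and the test asserts via NOT (the autotest_common.sh helper that inverts exit status) that it never starts listening. The "Cannot create lock on core 0, probably process 1744263 has claimed it" and "Unable to acquire lock on assigned core mask - exiting" errors a little further down are that expected abort; locks_exist then confirms the first instance still holds its lock. Schematically:

    $spdk_tgt -m 0x1 &                           # claims core 0
    $spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # aborts: core 0 already locked
    NOT waitforlisten $! /var/tmp/spdk2.sock     # assert the second one never comes up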
00:06:12.103 [2024-11-28 08:05:09.230175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744263 ] 00:06:12.103 [2024-11-28 08:05:09.318016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.103 [2024-11-28 08:05:09.357504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.045 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.045 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:13.045 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1744425 00:06:13.045 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1744425 /var/tmp/spdk2.sock 00:06:13.045 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:13.045 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:13.045 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1744425 /var/tmp/spdk2.sock 00:06:13.045 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:13.045 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.045 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:13.045 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.045 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1744425 /var/tmp/spdk2.sock 00:06:13.045 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1744425 ']' 00:06:13.045 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.045 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.045 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.045 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.045 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.045 [2024-11-28 08:05:10.097930] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:06:13.045 [2024-11-28 08:05:10.097985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744425 ] 00:06:13.045 [2024-11-28 08:05:10.184144] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1744263 has claimed it. 00:06:13.045 [2024-11-28 08:05:10.184182] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:13.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1744425) - No such process 00:06:13.616 ERROR: process (pid: 1744425) is no longer running 00:06:13.616 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.616 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:13.616 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:13.616 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:13.616 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:13.616 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:13.616 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1744263 00:06:13.616 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1744263 00:06:13.616 08:05:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.877 lslocks: write error 00:06:13.877 08:05:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1744263 00:06:13.877 08:05:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1744263 ']' 00:06:13.877 08:05:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1744263 00:06:13.877 08:05:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:13.877 08:05:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.877 08:05:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1744263 00:06:13.877 08:05:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.877 08:05:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.877 08:05:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1744263' 00:06:13.877 killing process with pid 1744263 00:06:13.877 08:05:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1744263 00:06:13.877 08:05:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1744263 00:06:14.138 00:06:14.138 real 0m2.141s 00:06:14.138 user 0m2.424s 00:06:14.138 sys 0m0.587s 00:06:14.138 08:05:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:06:14.138 08:05:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.138 ************************************ 00:06:14.138 END TEST locking_app_on_locked_coremask 00:06:14.138 ************************************ 00:06:14.138 08:05:11 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:14.138 08:05:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.138 08:05:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.138 08:05:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.138 ************************************ 00:06:14.138 START TEST locking_overlapped_coremask 00:06:14.138 ************************************ 00:06:14.138 08:05:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:14.138 08:05:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1744787 00:06:14.138 08:05:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1744787 /var/tmp/spdk.sock 00:06:14.138 08:05:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:14.138 08:05:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1744787 ']' 00:06:14.138 08:05:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.138 08:05:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.138 08:05:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.138 08:05:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.138 08:05:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.399 [2024-11-28 08:05:11.444565] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
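locking_overlapped_coremask, starting above, moves from a single core to overlapping masks: the first target takes -m 0x7 and the second is launched with -m 0x1c, and the masks share exactly one core. In binary, 0x7 = 0b00111 (cores 0-2) and 0x1c = 0b11100 (cores 2-4), so 0x7 & 0x1c = 0b00100, i.e. core 2 — matching the "Cannot create lock on core 2" error that follows. After the expected failure, check_remaining_locks verifies the survivor still owns one lock file per core of its mask, exactly as in the trace:

    # check_remaining_locks, per the cpu_locks.sh trace: for mask 0x7 the
    # lock files must be exactly /var/tmp/spdk_cpu_lock_000..002.
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]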
00:06:14.399 [2024-11-28 08:05:11.444615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744787 ] 00:06:14.399 [2024-11-28 08:05:11.530026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.399 [2024-11-28 08:05:11.563460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.399 [2024-11-28 08:05:11.563610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.399 [2024-11-28 08:05:11.563612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.971 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.971 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:14.971 08:05:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1744817 00:06:14.971 08:05:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1744817 /var/tmp/spdk2.sock 00:06:14.971 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:14.971 08:05:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:14.971 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1744817 /var/tmp/spdk2.sock 00:06:14.971 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:14.971 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.971 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:14.971 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.971 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1744817 /var/tmp/spdk2.sock 00:06:14.971 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1744817 ']' 00:06:14.971 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.971 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.971 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.971 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.971 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.232 [2024-11-28 08:05:12.310801] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:06:15.232 [2024-11-28 08:05:12.310859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1744817 ] 00:06:15.232 [2024-11-28 08:05:12.424712] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1744787 has claimed it. 00:06:15.232 [2024-11-28 08:05:12.424758] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:15.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1744817) - No such process 00:06:15.803 ERROR: process (pid: 1744817) is no longer running 00:06:15.803 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.803 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:15.803 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:15.803 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:15.803 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:15.803 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:15.803 08:05:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:15.803 08:05:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:15.803 08:05:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:15.803 08:05:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:15.803 08:05:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1744787 00:06:15.803 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1744787 ']' 00:06:15.803 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1744787 00:06:15.803 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:15.803 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.803 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1744787 00:06:15.804 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.804 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.804 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1744787' 00:06:15.804 killing process with pid 1744787 00:06:15.804 08:05:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1744787 00:06:15.804 08:05:12 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1744787 00:06:16.065 00:06:16.065 real 0m1.776s 00:06:16.065 user 0m5.159s 00:06:16.065 sys 0m0.387s 00:06:16.065 08:05:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.065 08:05:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.065 ************************************ 00:06:16.065 END TEST locking_overlapped_coremask 00:06:16.065 ************************************ 00:06:16.065 08:05:13 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:16.065 08:05:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.065 08:05:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.065 08:05:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.065 ************************************ 00:06:16.065 START TEST locking_overlapped_coremask_via_rpc 00:06:16.065 ************************************ 00:06:16.065 08:05:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:16.065 08:05:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1745164 00:06:16.065 08:05:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1745164 /var/tmp/spdk.sock 00:06:16.065 08:05:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:16.065 08:05:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1745164 ']' 00:06:16.065 08:05:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.065 08:05:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.065 08:05:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.065 08:05:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.065 08:05:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.065 [2024-11-28 08:05:13.296867] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:06:16.065 [2024-11-28 08:05:13.296920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745164 ] 00:06:16.326 [2024-11-28 08:05:13.379536] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:16.326 [2024-11-28 08:05:13.379559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.326 [2024-11-28 08:05:13.411601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.326 [2024-11-28 08:05:13.411749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.326 [2024-11-28 08:05:13.411751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.079 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.079 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:17.079 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1745248 00:06:17.079 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1745248 /var/tmp/spdk2.sock 00:06:17.079 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1745248 ']' 00:06:17.079 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:17.079 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.079 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.079 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.080 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.080 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.080 [2024-11-28 08:05:14.131522] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:06:17.080 [2024-11-28 08:05:14.131577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745248 ] 00:06:17.080 [2024-11-28 08:05:14.243693] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
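Unlike the plain coremask test above, both targets in this via_rpc variant start with --disable-cpumask-locks, which is why the first target logs "CPU core locks deactivated" and the overlapping masks are tolerated at startup; locking is only enabled afterwards over JSON-RPC. The launch pattern, abbreviated from the traced command lines (paths shortened, a sketch):

    # both masks include core 2, but neither process takes core locks at boot
    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &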
00:06:17.080 [2024-11-28 08:05:14.243718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:17.080 [2024-11-28 08:05:14.317428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.080 [2024-11-28 08:05:14.321221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.080 [2024-11-28 08:05:14.321222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:17.652 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.652 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:17.652 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:17.652 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.652 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.652 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.652 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.652 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:17.652 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.652 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:17.652 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.652 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:17.652 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.652 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.652 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.652 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.652 [2024-11-28 08:05:14.930237] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1745164 has claimed it. 
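The claim error above is the lock-file mechanism at work: each reactor core is guarded by a file of the form /var/tmp/spdk_cpu_lock_<core> (zero-padded to three digits), and once the first target (pid 1745164) enables locks over RPC it owns core 2, so the second target's attempt fails. The suite's own check_remaining_locks, traced elsewhere in this log, verifies exactly that set of files:

    # shape of the traced check: only cores 0-2 may hold lock files
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'cores 0-2 locked, nothing stray'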
00:06:17.652 request: 00:06:17.652 { 00:06:17.652 "method": "framework_enable_cpumask_locks", 00:06:17.652 "req_id": 1 00:06:17.652 } 00:06:17.652 Got JSON-RPC error response 00:06:17.652 response: 00:06:17.652 { 00:06:17.652 "code": -32603, 00:06:17.652 "message": "Failed to claim CPU core: 2" 00:06:17.652 } 00:06:17.913 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:17.913 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:17.913 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:17.913 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:17.913 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:17.913 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1745164 /var/tmp/spdk.sock 00:06:17.913 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1745164 ']' 00:06:17.913 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.913 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.913 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.913 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.913 08:05:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.913 08:05:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.913 08:05:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:17.913 08:05:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1745248 /var/tmp/spdk2.sock 00:06:17.913 08:05:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1745248 ']' 00:06:17.913 08:05:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.913 08:05:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.913 08:05:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
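The request/response pair above is what the failure looks like on the wire: the method itself is accepted, but claiming core 2 fails, so the server answers with JSON-RPC internal error -32603. The same call issued directly with rpc.py (socket path from the trace, a sketch):

    # ask the second target to take its core locks while the first still holds core 2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected: {"code": -32603, "message": "Failed to claim CPU core: 2"}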
00:06:17.913 08:05:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.913 08:05:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.175 08:05:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.175 08:05:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:18.175 08:05:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:18.175 08:05:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:18.175 08:05:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:18.175 08:05:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:18.175 00:06:18.175 real 0m2.066s 00:06:18.175 user 0m0.855s 00:06:18.175 sys 0m0.141s 00:06:18.175 08:05:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.175 08:05:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.175 ************************************ 00:06:18.175 END TEST locking_overlapped_coremask_via_rpc 00:06:18.175 ************************************ 00:06:18.175 08:05:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:18.175 08:05:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1745164 ]] 00:06:18.175 08:05:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1745164 00:06:18.175 08:05:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1745164 ']' 00:06:18.175 08:05:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1745164 00:06:18.175 08:05:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:18.175 08:05:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.175 08:05:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1745164 00:06:18.175 08:05:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.175 08:05:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.175 08:05:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1745164' 00:06:18.175 killing process with pid 1745164 00:06:18.175 08:05:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1745164 00:06:18.175 08:05:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1745164 00:06:18.436 08:05:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1745248 ]] 00:06:18.436 08:05:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1745248 00:06:18.436 08:05:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1745248 ']' 00:06:18.436 08:05:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1745248 00:06:18.436 08:05:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:18.436 08:05:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:18.436 08:05:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1745248 00:06:18.436 08:05:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:18.436 08:05:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:18.436 08:05:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1745248' 00:06:18.436 killing process with pid 1745248 00:06:18.436 08:05:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1745248 00:06:18.436 08:05:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1745248 00:06:18.696 08:05:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:18.696 08:05:15 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:18.696 08:05:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1745164 ]] 00:06:18.696 08:05:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1745164 00:06:18.696 08:05:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1745164 ']' 00:06:18.696 08:05:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1745164 00:06:18.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1745164) - No such process 00:06:18.696 08:05:15 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1745164 is not found' 00:06:18.696 Process with pid 1745164 is not found 00:06:18.696 08:05:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1745248 ]] 00:06:18.696 08:05:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1745248 00:06:18.696 08:05:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1745248 ']' 00:06:18.696 08:05:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1745248 00:06:18.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1745248) - No such process 00:06:18.696 08:05:15 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1745248 is not found' 00:06:18.696 Process with pid 1745248 is not found 00:06:18.696 08:05:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:18.696 00:06:18.696 real 0m15.815s 00:06:18.696 user 0m27.915s 00:06:18.696 sys 0m4.786s 00:06:18.696 08:05:15 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.696 08:05:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.696 ************************************ 00:06:18.696 END TEST cpu_locks 00:06:18.696 ************************************ 00:06:18.696 00:06:18.696 real 0m41.708s 00:06:18.696 user 1m22.186s 00:06:18.696 sys 0m8.165s 00:06:18.696 08:05:15 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.696 08:05:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.696 ************************************ 00:06:18.696 END TEST event 00:06:18.696 ************************************ 00:06:18.696 08:05:15 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:18.696 08:05:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.696 08:05:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.696 08:05:15 -- common/autotest_common.sh@10 -- # set +x 00:06:18.957 ************************************ 00:06:18.957 START TEST thread 00:06:18.957 ************************************ 00:06:18.957 08:05:15 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:18.957 * Looking for test storage... 00:06:18.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:18.957 08:05:16 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.957 08:05:16 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.957 08:05:16 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.957 08:05:16 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.957 08:05:16 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.957 08:05:16 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.957 08:05:16 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.957 08:05:16 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.957 08:05:16 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.957 08:05:16 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.957 08:05:16 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.957 08:05:16 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.957 08:05:16 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.957 08:05:16 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.957 08:05:16 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.957 08:05:16 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:18.957 08:05:16 thread -- scripts/common.sh@345 -- # : 1 00:06:18.957 08:05:16 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.957 08:05:16 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.957 08:05:16 thread -- scripts/common.sh@365 -- # decimal 1 00:06:18.957 08:05:16 thread -- scripts/common.sh@353 -- # local d=1 00:06:18.957 08:05:16 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.957 08:05:16 thread -- scripts/common.sh@355 -- # echo 1 00:06:18.957 08:05:16 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.957 08:05:16 thread -- scripts/common.sh@366 -- # decimal 2 00:06:18.957 08:05:16 thread -- scripts/common.sh@353 -- # local d=2 00:06:18.957 08:05:16 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.957 08:05:16 thread -- scripts/common.sh@355 -- # echo 2 00:06:18.957 08:05:16 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.957 08:05:16 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.957 08:05:16 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.957 08:05:16 thread -- scripts/common.sh@368 -- # return 0 00:06:18.957 08:05:16 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.957 08:05:16 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.957 --rc genhtml_branch_coverage=1 00:06:18.957 --rc genhtml_function_coverage=1 00:06:18.957 --rc genhtml_legend=1 00:06:18.957 --rc geninfo_all_blocks=1 00:06:18.957 --rc geninfo_unexecuted_blocks=1 00:06:18.957 00:06:18.957 ' 00:06:18.957 08:05:16 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.957 --rc genhtml_branch_coverage=1 00:06:18.957 --rc genhtml_function_coverage=1 00:06:18.957 --rc genhtml_legend=1 00:06:18.957 --rc geninfo_all_blocks=1 00:06:18.957 --rc geninfo_unexecuted_blocks=1 00:06:18.957 
00:06:18.957 ' 00:06:18.957 08:05:16 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.957 --rc genhtml_branch_coverage=1 00:06:18.957 --rc genhtml_function_coverage=1 00:06:18.957 --rc genhtml_legend=1 00:06:18.957 --rc geninfo_all_blocks=1 00:06:18.957 --rc geninfo_unexecuted_blocks=1 00:06:18.957 00:06:18.957 ' 00:06:18.957 08:05:16 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.957 --rc genhtml_branch_coverage=1 00:06:18.957 --rc genhtml_function_coverage=1 00:06:18.957 --rc genhtml_legend=1 00:06:18.957 --rc geninfo_all_blocks=1 00:06:18.957 --rc geninfo_unexecuted_blocks=1 00:06:18.957 00:06:18.957 ' 00:06:18.957 08:05:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:18.957 08:05:16 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:18.957 08:05:16 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.957 08:05:16 thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.957 ************************************ 00:06:18.957 START TEST thread_poller_perf 00:06:18.957 ************************************ 00:06:18.957 08:05:16 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:19.216 [2024-11-28 08:05:16.252310] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:06:19.216 [2024-11-28 08:05:16.252425] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745918 ] 00:06:19.217 [2024-11-28 08:05:16.342943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.217 [2024-11-28 08:05:16.382431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.217 Running 1000 pollers for 1 seconds with 1 microseconds period. 
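For reading the result blocks that follow: poller_perf registers -b 1000 pollers on one reactor, runs for -t 1 second, and -l sets the poller period in microseconds, so the suite exercises both variants (command lines as traced, paths abbreviated):

    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # timed pollers, 1 us period
    test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # 0 us period, polled continuously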
00:06:20.156 [2024-11-28T07:05:17.445Z] ====================================== 00:06:20.156 [2024-11-28T07:05:17.445Z] busy:2410146148 (cyc) 00:06:20.156 [2024-11-28T07:05:17.445Z] total_run_count: 419000 00:06:20.156 [2024-11-28T07:05:17.445Z] tsc_hz: 2400000000 (cyc) 00:06:20.156 [2024-11-28T07:05:17.445Z] ====================================== 00:06:20.156 [2024-11-28T07:05:17.445Z] poller_cost: 5752 (cyc), 2396 (nsec) 00:06:20.156 00:06:20.156 real 0m1.186s 00:06:20.156 user 0m1.092s 00:06:20.156 sys 0m0.090s 00:06:20.156 08:05:17 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.156 08:05:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:20.156 ************************************ 00:06:20.156 END TEST thread_poller_perf 00:06:20.156 ************************************ 00:06:20.416 08:05:17 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:20.416 08:05:17 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:20.416 08:05:17 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.416 08:05:17 thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.416 ************************************ 00:06:20.416 START TEST thread_poller_perf 00:06:20.416 ************************************ 00:06:20.416 08:05:17 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:20.416 [2024-11-28 08:05:17.513857] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:06:20.416 [2024-11-28 08:05:17.513971] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1746042 ] 00:06:20.416 [2024-11-28 08:05:17.600928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.416 [2024-11-28 08:05:17.640230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.416 Running 1000 pollers for 1 seconds with 0 microseconds period. 
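poller_cost in the summary above is simply busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. Checking the first run's figures with integer arithmetic (the second run below works out the same way, 431 cyc -> 179 nsec):

    busy=2410146148; runs=419000; tsc_hz=2400000000
    echo $(( busy / runs ))                        # -> 5752 cycles per poller invocation
    echo $(( busy / runs * 1000000000 / tsc_hz ))  # -> 2396 nsec at 2.4 GHz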
00:06:21.799 [2024-11-28T07:05:19.088Z] ====================================== 00:06:21.799 [2024-11-28T07:05:19.088Z] busy:2401610610 (cyc) 00:06:21.799 [2024-11-28T07:05:19.088Z] total_run_count: 5567000 00:06:21.799 [2024-11-28T07:05:19.088Z] tsc_hz: 2400000000 (cyc) 00:06:21.799 [2024-11-28T07:05:19.088Z] ====================================== 00:06:21.799 [2024-11-28T07:05:19.088Z] poller_cost: 431 (cyc), 179 (nsec) 00:06:21.799 00:06:21.799 real 0m1.176s 00:06:21.799 user 0m1.098s 00:06:21.799 sys 0m0.075s 00:06:21.799 08:05:18 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.799 08:05:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.799 ************************************ 00:06:21.799 END TEST thread_poller_perf 00:06:21.799 ************************************ 00:06:21.799 08:05:18 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:21.799 00:06:21.799 real 0m2.722s 00:06:21.799 user 0m2.361s 00:06:21.799 sys 0m0.374s 00:06:21.799 08:05:18 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.799 08:05:18 thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.799 ************************************ 00:06:21.799 END TEST thread 00:06:21.799 ************************************ 00:06:21.799 08:05:18 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:21.799 08:05:18 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:21.799 08:05:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.799 08:05:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.799 08:05:18 -- common/autotest_common.sh@10 -- # set +x 00:06:21.799 ************************************ 00:06:21.799 START TEST app_cmdline 00:06:21.799 ************************************ 00:06:21.799 08:05:18 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:21.799 * Looking for test storage... 
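The block that follows, and that reappears at the top of every run_test section in this log, is autotest_common.sh probing the installed lcov: it takes the last field of lcov --version (here 1.15, hence the traced "lt 1.15 2"), compares it against 2 with scripts/common.sh's element-wise cmp_versions, and keeps the legacy --rc LCOV options when the version is older. A rough equivalent using sort -V instead of the script's own comparator (an approximation, not the traced code):

    ver=$(lcov --version | awk '{print $NF}')
    if [ "$(printf '%s\n' "$ver" 2 | sort -V | head -n1)" != 2 ]; then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi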
00:06:21.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:21.799 08:05:18 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:21.799 08:05:18 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:21.799 08:05:18 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:21.799 08:05:18 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.800 08:05:18 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:21.800 08:05:18 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.800 08:05:18 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:21.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.800 --rc genhtml_branch_coverage=1 00:06:21.800 --rc genhtml_function_coverage=1 00:06:21.800 --rc genhtml_legend=1 00:06:21.800 --rc geninfo_all_blocks=1 00:06:21.800 --rc geninfo_unexecuted_blocks=1 00:06:21.800 00:06:21.800 ' 00:06:21.800 08:05:18 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:21.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.800 --rc genhtml_branch_coverage=1 00:06:21.800 --rc genhtml_function_coverage=1 00:06:21.800 --rc genhtml_legend=1 00:06:21.800 --rc geninfo_all_blocks=1 00:06:21.800 --rc geninfo_unexecuted_blocks=1 
00:06:21.800 00:06:21.800 ' 00:06:21.800 08:05:18 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:21.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.800 --rc genhtml_branch_coverage=1 00:06:21.800 --rc genhtml_function_coverage=1 00:06:21.800 --rc genhtml_legend=1 00:06:21.800 --rc geninfo_all_blocks=1 00:06:21.800 --rc geninfo_unexecuted_blocks=1 00:06:21.800 00:06:21.800 ' 00:06:21.800 08:05:18 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:21.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.800 --rc genhtml_branch_coverage=1 00:06:21.800 --rc genhtml_function_coverage=1 00:06:21.800 --rc genhtml_legend=1 00:06:21.800 --rc geninfo_all_blocks=1 00:06:21.800 --rc geninfo_unexecuted_blocks=1 00:06:21.800 00:06:21.800 ' 00:06:21.800 08:05:18 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:21.800 08:05:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1746388 00:06:21.800 08:05:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1746388 00:06:21.800 08:05:18 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1746388 ']' 00:06:21.800 08:05:18 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:21.800 08:05:18 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.800 08:05:18 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.800 08:05:18 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.800 08:05:18 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.800 08:05:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:21.800 [2024-11-28 08:05:19.052899] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
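Note the --rpcs-allowed spdk_get_version,rpc_get_methods flag on this target: the cmdline test deliberately starts a server that answers only those two methods, which is why the env_dpdk_get_mem_stats call further down must fail with -32601 (Method not found) rather than execute. The contrast, as two rpc.py calls (a sketch):

    scripts/rpc.py spdk_get_version         # on the allow-list -> version JSON
    scripts/rpc.py env_dpdk_get_mem_stats   # filtered out -> {"code": -32601, "message": "Method not found"}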
00:06:21.800 [2024-11-28 08:05:19.052975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1746388 ] 00:06:22.061 [2024-11-28 08:05:19.141891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.061 [2024-11-28 08:05:19.177529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.632 08:05:19 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.632 08:05:19 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:22.632 08:05:19 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:22.893 { 00:06:22.893 "version": "SPDK v25.01-pre git sha1 37db29af3", 00:06:22.893 "fields": { 00:06:22.893 "major": 25, 00:06:22.893 "minor": 1, 00:06:22.893 "patch": 0, 00:06:22.893 "suffix": "-pre", 00:06:22.893 "commit": "37db29af3" 00:06:22.893 } 00:06:22.893 } 00:06:22.893 08:05:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:22.893 08:05:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:22.893 08:05:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:22.893 08:05:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:22.893 08:05:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:22.893 08:05:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:22.893 08:05:19 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.893 08:05:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:22.893 08:05:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:22.893 08:05:20 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.893 08:05:20 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:22.893 08:05:20 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:22.894 08:05:20 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.894 08:05:20 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:22.894 08:05:20 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.894 08:05:20 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.894 08:05:20 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.894 08:05:20 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.894 08:05:20 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.894 08:05:20 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.894 08:05:20 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.894 08:05:20 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:22.894 08:05:20 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:22.894 08:05:20 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:23.154 request: 00:06:23.154 { 00:06:23.154 "method": "env_dpdk_get_mem_stats", 00:06:23.154 "req_id": 1 00:06:23.154 } 00:06:23.154 Got JSON-RPC error response 00:06:23.154 response: 00:06:23.154 { 00:06:23.154 "code": -32601, 00:06:23.154 "message": "Method not found" 00:06:23.154 } 00:06:23.154 08:05:20 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:23.154 08:05:20 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:23.154 08:05:20 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:23.154 08:05:20 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:23.154 08:05:20 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1746388 00:06:23.154 08:05:20 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1746388 ']' 00:06:23.154 08:05:20 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1746388 00:06:23.154 08:05:20 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:23.154 08:05:20 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.154 08:05:20 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1746388 00:06:23.154 08:05:20 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.154 08:05:20 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.154 08:05:20 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1746388' 00:06:23.154 killing process with pid 1746388 00:06:23.154 08:05:20 app_cmdline -- common/autotest_common.sh@973 -- # kill 1746388 00:06:23.154 08:05:20 app_cmdline -- common/autotest_common.sh@978 -- # wait 1746388 00:06:23.415 00:06:23.415 real 0m1.691s 00:06:23.415 user 0m1.998s 00:06:23.415 sys 0m0.465s 00:06:23.415 08:05:20 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.415 08:05:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:23.415 ************************************ 00:06:23.415 END TEST app_cmdline 00:06:23.415 ************************************ 00:06:23.415 08:05:20 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:23.415 08:05:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.415 08:05:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.415 08:05:20 -- common/autotest_common.sh@10 -- # set +x 00:06:23.415 ************************************ 00:06:23.415 START TEST version 00:06:23.415 ************************************ 00:06:23.415 08:05:20 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:23.415 * Looking for test storage... 
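The version test entered here never asks a running target for its version; it scrapes include/spdk/version.h with the get_header_version helper traced below (grep the #define, take the tab-separated second field, strip the quotes), assembles 25.1rc0 from major/minor/patch/suffix, and cross-checks it against python's spdk.__version__. The parsing reduces to (path abbreviated, a sketch):

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'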
00:06:23.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:23.415 08:05:20 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:23.415 08:05:20 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:23.415 08:05:20 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:23.676 08:05:20 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:23.676 08:05:20 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.676 08:05:20 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.676 08:05:20 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.676 08:05:20 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.676 08:05:20 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.676 08:05:20 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.676 08:05:20 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.676 08:05:20 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.676 08:05:20 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.676 08:05:20 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.676 08:05:20 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.676 08:05:20 version -- scripts/common.sh@344 -- # case "$op" in 00:06:23.676 08:05:20 version -- scripts/common.sh@345 -- # : 1 00:06:23.676 08:05:20 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.676 08:05:20 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.676 08:05:20 version -- scripts/common.sh@365 -- # decimal 1 00:06:23.676 08:05:20 version -- scripts/common.sh@353 -- # local d=1 00:06:23.676 08:05:20 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.676 08:05:20 version -- scripts/common.sh@355 -- # echo 1 00:06:23.676 08:05:20 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.676 08:05:20 version -- scripts/common.sh@366 -- # decimal 2 00:06:23.676 08:05:20 version -- scripts/common.sh@353 -- # local d=2 00:06:23.676 08:05:20 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.676 08:05:20 version -- scripts/common.sh@355 -- # echo 2 00:06:23.676 08:05:20 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.676 08:05:20 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.676 08:05:20 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.676 08:05:20 version -- scripts/common.sh@368 -- # return 0 00:06:23.676 08:05:20 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.676 08:05:20 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:23.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.676 --rc genhtml_branch_coverage=1 00:06:23.676 --rc genhtml_function_coverage=1 00:06:23.676 --rc genhtml_legend=1 00:06:23.676 --rc geninfo_all_blocks=1 00:06:23.676 --rc geninfo_unexecuted_blocks=1 00:06:23.676 00:06:23.676 ' 00:06:23.676 08:05:20 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:23.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.676 --rc genhtml_branch_coverage=1 00:06:23.676 --rc genhtml_function_coverage=1 00:06:23.676 --rc genhtml_legend=1 00:06:23.676 --rc geninfo_all_blocks=1 00:06:23.676 --rc geninfo_unexecuted_blocks=1 00:06:23.676 00:06:23.676 ' 00:06:23.676 08:05:20 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:23.676 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.676 --rc genhtml_branch_coverage=1 00:06:23.676 --rc genhtml_function_coverage=1 00:06:23.676 --rc genhtml_legend=1 00:06:23.676 --rc geninfo_all_blocks=1 00:06:23.676 --rc geninfo_unexecuted_blocks=1 00:06:23.676 00:06:23.676 ' 00:06:23.676 08:05:20 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:23.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.676 --rc genhtml_branch_coverage=1 00:06:23.676 --rc genhtml_function_coverage=1 00:06:23.676 --rc genhtml_legend=1 00:06:23.676 --rc geninfo_all_blocks=1 00:06:23.676 --rc geninfo_unexecuted_blocks=1 00:06:23.676 00:06:23.676 ' 00:06:23.676 08:05:20 version -- app/version.sh@17 -- # get_header_version major 00:06:23.676 08:05:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.676 08:05:20 version -- app/version.sh@14 -- # cut -f2 00:06:23.676 08:05:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.676 08:05:20 version -- app/version.sh@17 -- # major=25 00:06:23.676 08:05:20 version -- app/version.sh@18 -- # get_header_version minor 00:06:23.676 08:05:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.676 08:05:20 version -- app/version.sh@14 -- # cut -f2 00:06:23.676 08:05:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.676 08:05:20 version -- app/version.sh@18 -- # minor=1 00:06:23.676 08:05:20 version -- app/version.sh@19 -- # get_header_version patch 00:06:23.676 08:05:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.676 08:05:20 version -- app/version.sh@14 -- # cut -f2 00:06:23.676 08:05:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.676 08:05:20 version -- app/version.sh@19 -- # patch=0 00:06:23.676 08:05:20 version -- app/version.sh@20 -- # get_header_version suffix 00:06:23.676 08:05:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:23.676 08:05:20 version -- app/version.sh@14 -- # cut -f2 00:06:23.676 08:05:20 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.676 08:05:20 version -- app/version.sh@20 -- # suffix=-pre 00:06:23.676 08:05:20 version -- app/version.sh@22 -- # version=25.1 00:06:23.676 08:05:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:23.676 08:05:20 version -- app/version.sh@28 -- # version=25.1rc0 00:06:23.676 08:05:20 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:23.676 08:05:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:23.676 08:05:20 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:23.676 08:05:20 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:23.676 00:06:23.676 real 0m0.284s 00:06:23.676 user 0m0.179s 00:06:23.677 sys 0m0.153s 00:06:23.677 08:05:20 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.677 
08:05:20 version -- common/autotest_common.sh@10 -- # set +x 00:06:23.677 ************************************ 00:06:23.677 END TEST version 00:06:23.677 ************************************ 00:06:23.677 08:05:20 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:23.677 08:05:20 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:23.677 08:05:20 -- spdk/autotest.sh@194 -- # uname -s 00:06:23.677 08:05:20 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:23.677 08:05:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:23.677 08:05:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:23.677 08:05:20 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:23.677 08:05:20 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:23.677 08:05:20 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:23.677 08:05:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:23.677 08:05:20 -- common/autotest_common.sh@10 -- # set +x 00:06:23.677 08:05:20 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:23.677 08:05:20 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:23.677 08:05:20 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:23.677 08:05:20 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:23.677 08:05:20 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:23.677 08:05:20 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:23.677 08:05:20 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:23.677 08:05:20 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:23.677 08:05:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.677 08:05:20 -- common/autotest_common.sh@10 -- # set +x 00:06:23.937 ************************************ 00:06:23.937 START TEST nvmf_tcp 00:06:23.937 ************************************ 00:06:23.937 08:05:20 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:23.937 * Looking for test storage... 
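From here the log enters the nvmf suite proper: nvmf.sh first checks for Linux (the uname -s test traced below), then hands --transport=tcp through to nvmf_target_core.sh, which sources test/nvmf/common.sh to define the listener ports (4420-4422), the 192.168.100 address prefix, and a freshly generated host NQN. The dispatch, heavily simplified (a sketch, not the script itself):

    [ "$(uname -s)" = Linux ] || { echo 'nvmf tests require Linux'; exit 1; }
    run_test nvmf_target_core test/nvmf/nvmf_target_core.sh --transport=tcp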
00:06:23.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:23.937 08:05:21 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:23.937 08:05:21 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:23.937 08:05:21 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:23.937 08:05:21 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.937 08:05:21 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:23.937 08:05:21 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.937 08:05:21 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:23.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.937 --rc genhtml_branch_coverage=1 00:06:23.937 --rc genhtml_function_coverage=1 00:06:23.937 --rc genhtml_legend=1 00:06:23.937 --rc geninfo_all_blocks=1 00:06:23.937 --rc geninfo_unexecuted_blocks=1 00:06:23.937 00:06:23.937 ' 00:06:23.937 08:05:21 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:23.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.937 --rc genhtml_branch_coverage=1 00:06:23.937 --rc genhtml_function_coverage=1 00:06:23.937 --rc genhtml_legend=1 00:06:23.937 --rc geninfo_all_blocks=1 00:06:23.937 --rc geninfo_unexecuted_blocks=1 00:06:23.937 00:06:23.937 ' 00:06:23.937 08:05:21 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:23.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.937 --rc genhtml_branch_coverage=1 00:06:23.937 --rc genhtml_function_coverage=1 00:06:23.937 --rc genhtml_legend=1 00:06:23.937 --rc geninfo_all_blocks=1 00:06:23.937 --rc geninfo_unexecuted_blocks=1 00:06:23.937 00:06:23.937 ' 00:06:23.937 08:05:21 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:23.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.937 --rc genhtml_branch_coverage=1 00:06:23.937 --rc genhtml_function_coverage=1 00:06:23.937 --rc genhtml_legend=1 00:06:23.937 --rc geninfo_all_blocks=1 00:06:23.937 --rc geninfo_unexecuted_blocks=1 00:06:23.937 00:06:23.937 ' 00:06:23.937 08:05:21 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:23.937 08:05:21 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:23.937 08:05:21 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:23.937 08:05:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:23.937 08:05:21 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.937 08:05:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.937 ************************************ 00:06:23.937 START TEST nvmf_target_core 00:06:23.937 ************************************ 00:06:23.938 08:05:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:24.199 * Looking for test storage... 00:06:24.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.199 08:05:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.199 --rc genhtml_branch_coverage=1 00:06:24.200 --rc genhtml_function_coverage=1 00:06:24.200 --rc genhtml_legend=1 00:06:24.200 --rc geninfo_all_blocks=1 00:06:24.200 --rc geninfo_unexecuted_blocks=1 00:06:24.200 00:06:24.200 ' 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.200 --rc genhtml_branch_coverage=1 00:06:24.200 --rc genhtml_function_coverage=1 00:06:24.200 --rc genhtml_legend=1 00:06:24.200 --rc geninfo_all_blocks=1 00:06:24.200 --rc geninfo_unexecuted_blocks=1 00:06:24.200 00:06:24.200 ' 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.200 --rc genhtml_branch_coverage=1 00:06:24.200 --rc genhtml_function_coverage=1 00:06:24.200 --rc genhtml_legend=1 00:06:24.200 --rc geninfo_all_blocks=1 00:06:24.200 --rc geninfo_unexecuted_blocks=1 00:06:24.200 00:06:24.200 ' 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.200 --rc genhtml_branch_coverage=1 00:06:24.200 --rc genhtml_function_coverage=1 00:06:24.200 --rc genhtml_legend=1 00:06:24.200 --rc geninfo_all_blocks=1 00:06:24.200 --rc geninfo_unexecuted_blocks=1 00:06:24.200 00:06:24.200 ' 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:24.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.200 08:05:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:24.462 
************************************ 00:06:24.462 START TEST nvmf_abort 00:06:24.462 ************************************ 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:24.462 * Looking for test storage... 00:06:24.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:24.462 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.463 --rc genhtml_branch_coverage=1 00:06:24.463 --rc genhtml_function_coverage=1 00:06:24.463 --rc genhtml_legend=1 00:06:24.463 --rc geninfo_all_blocks=1 00:06:24.463 --rc geninfo_unexecuted_blocks=1 00:06:24.463 00:06:24.463 ' 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.463 --rc genhtml_branch_coverage=1 00:06:24.463 --rc genhtml_function_coverage=1 00:06:24.463 --rc genhtml_legend=1 00:06:24.463 --rc geninfo_all_blocks=1 00:06:24.463 --rc geninfo_unexecuted_blocks=1 00:06:24.463 00:06:24.463 ' 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.463 --rc genhtml_branch_coverage=1 00:06:24.463 --rc genhtml_function_coverage=1 00:06:24.463 --rc genhtml_legend=1 00:06:24.463 --rc geninfo_all_blocks=1 00:06:24.463 --rc geninfo_unexecuted_blocks=1 00:06:24.463 00:06:24.463 ' 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.463 --rc genhtml_branch_coverage=1 00:06:24.463 --rc genhtml_function_coverage=1 00:06:24.463 --rc genhtml_legend=1 00:06:24.463 --rc geninfo_all_blocks=1 00:06:24.463 --rc geninfo_unexecuted_blocks=1 00:06:24.463 00:06:24.463 ' 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:24.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
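A note on the trace so far: the `[: : integer expression expected` message comes from line 33 of test/nvmf/common.sh, where `[ '' -eq 1 ]` is evaluated because an optional gating variable is empty and bash's test builtin refuses a non-integer operand; the run treats it as harmless (a guard such as `[[ ${VAR:-0} -eq 1 ]]` would silence it). The nvmftestinit call that abort.sh has just made then discovers the two e810 ports and wires them into an isolated network namespace so target and initiator can exchange NVMe/TCP traffic on one host. As a rough sketch assembled from the commands visible later in this trace (the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addresses are this run's values, not fixed constants), the setup amounts to:

    # move one port into a private namespace for the target side
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # the initiator keeps cvl_0_1 in the default namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port, then verify reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two ping blocks further down in this trace are the last two commands succeeding.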
00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:24.463 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.464 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:24.464 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:24.464 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:24.464 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:32.625 08:05:28 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:32.625 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.625 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:32.626 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:32.626 08:05:28 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:32.626 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:32.626 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:32.626 08:05:28 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:32.626 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:32.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:32.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:06:32.626 00:06:32.626 --- 10.0.0.2 ping statistics --- 00:06:32.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.626 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:32.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:32.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:06:32.626 00:06:32.626 --- 10.0.0.1 ping statistics --- 00:06:32.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.626 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1750875 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1750875 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1750875 ']' 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.626 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:32.626 [2024-11-28 08:05:29.321566] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:06:32.626 [2024-11-28 08:05:29.321630] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.626 [2024-11-28 08:05:29.422509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.626 [2024-11-28 08:05:29.475745] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:32.626 [2024-11-28 08:05:29.475802] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:32.626 [2024-11-28 08:05:29.475810] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:32.626 [2024-11-28 08:05:29.475818] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:32.626 [2024-11-28 08:05:29.475825] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:32.626 [2024-11-28 08:05:29.477717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.626 [2024-11-28 08:05:29.477855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.626 [2024-11-28 08:05:29.477857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.887 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.887 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:32.887 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:32.887 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:32.887 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:33.148 [2024-11-28 08:05:30.186780] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:33.148 Malloc0 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:33.148 Delay0 
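At this point nvmf_tgt (pid 1750875) is running inside the target namespace with reactors on cores 1-3 (`-m 0xE`), and abort.sh drives it entirely over JSON-RPC on /var/tmp/spdk.sock; the rpc_cmd calls in the trace map onto scripts/rpc.py invocations. A minimal by-hand sketch of the same bring-up, with every flag taken verbatim from the trace entries around here (the next few entries perform the subsystem and listener steps):

    # start the target inside the namespace, then configure it via RPC
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    # 64 MiB RAM-backed bdev with 4096-byte blocks...
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    # ...wrapped in a delay bdev adding ~1 s latency (values in microseconds),
    # so abort commands have in-flight I/O to catch
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

With the listener up, the abort example app connects with `-q 128`; since the delay bdev holds I/O far longer than the queue can absorb, nearly every submitted command is still outstanding when its abort arrives, which is what the "abort submitted 27503 ... success 27444" tally below is exercising.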
00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:33.148 [2024-11-28 08:05:30.274549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.148 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:33.409 [2024-11-28 08:05:30.465348] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:35.323 Initializing NVMe Controllers 00:06:35.323 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:35.323 controller IO queue size 128 less than required 00:06:35.323 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:35.323 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:35.323 Initialization complete. Launching workers. 
00:06:35.323 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 125, failed: 27440 00:06:35.323 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27503, failed to submit 62 00:06:35.323 success 27444, unsuccessful 59, failed 0 00:06:35.323 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:35.323 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.323 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:35.323 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.323 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:35.323 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:35.323 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:35.323 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:35.323 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:35.323 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:35.323 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:35.323 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:35.323 rmmod nvme_tcp 00:06:35.323 rmmod nvme_fabrics 00:06:35.323 rmmod nvme_keyring 00:06:35.323 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:35.323 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:35.323 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:35.323 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1750875 ']' 00:06:35.323 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1750875 00:06:35.323 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1750875 ']' 00:06:35.323 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1750875 00:06:35.323 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:35.323 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.323 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1750875 00:06:35.583 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:35.583 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:35.583 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1750875' 00:06:35.583 killing process with pid 1750875 00:06:35.583 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1750875 00:06:35.583 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1750875 00:06:35.583 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:35.583 08:05:32 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:35.583 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:35.583 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:35.583 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:35.583 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:35.583 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:35.583 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:35.583 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:35.583 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.583 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:35.583 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.124 08:05:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:38.124 00:06:38.124 real 0m13.338s 00:06:38.124 user 0m13.706s 00:06:38.124 sys 0m6.747s 00:06:38.124 08:05:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.124 08:05:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:38.124 ************************************ 00:06:38.124 END TEST nvmf_abort 00:06:38.124 ************************************ 00:06:38.124 08:05:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:38.124 08:05:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:38.124 08:05:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.124 08:05:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:38.124 ************************************ 00:06:38.124 START TEST nvmf_ns_hotplug_stress 00:06:38.124 ************************************ 00:06:38.124 08:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:38.124 * Looking for test storage... 
00:06:38.124 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:38.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.124 --rc genhtml_branch_coverage=1 00:06:38.124 --rc genhtml_function_coverage=1 00:06:38.124 --rc genhtml_legend=1 00:06:38.124 --rc geninfo_all_blocks=1 00:06:38.124 --rc geninfo_unexecuted_blocks=1 00:06:38.124 00:06:38.124 ' 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:38.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.124 --rc genhtml_branch_coverage=1 00:06:38.124 --rc genhtml_function_coverage=1 00:06:38.124 --rc genhtml_legend=1 00:06:38.124 --rc geninfo_all_blocks=1 00:06:38.124 --rc geninfo_unexecuted_blocks=1 00:06:38.124 00:06:38.124 ' 00:06:38.124 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:38.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.124 --rc genhtml_branch_coverage=1 00:06:38.124 --rc genhtml_function_coverage=1 00:06:38.124 --rc genhtml_legend=1 00:06:38.124 --rc geninfo_all_blocks=1 00:06:38.124 --rc geninfo_unexecuted_blocks=1 00:06:38.125 00:06:38.125 ' 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:38.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.125 --rc genhtml_branch_coverage=1 00:06:38.125 --rc genhtml_function_coverage=1 00:06:38.125 --rc genhtml_legend=1 00:06:38.125 --rc geninfo_all_blocks=1 00:06:38.125 --rc geninfo_unexecuted_blocks=1 00:06:38.125 00:06:38.125 ' 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
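Among the defaults nvmf/common.sh pins down above (ports 4420 through 4422, serial, subsystem NQN), the host identity comes straight from nvme-cli, with the bare UUID peeled off for NVME_HOSTID. Equivalent one-liners, sketched from what the trace shows rather than quoted from common.sh:

NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}  # bare UUID, matching the NVME_HOSTID value above
echo "host identity: $NVME_HOSTNQN (hostid $NVME_HOSTID)"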
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
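The wall of PATH above is paths/export.sh prepending the Go, golangci and protoc toolchains once per source, so earlier prepends pile up. That is harmless (lookup stops at the first hit), though a dedup pass would keep the variable readable; a minimal sketch, not something the SPDK scripts do:

dedup_path() {
    local IFS=: seen=: out= p
    for p in $PATH; do
        [[ $seen == *":$p:"* ]] && continue   # already kept this entry
        seen+="$p:"
        out+="${out:+:}$p"
    done
    PATH=$out
}
dedup_path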
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:38.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:38.125 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:38.126 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:38.126 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:38.126 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.126 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:38.126 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.126 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:38.126 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:38.126 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:38.126 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
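The lone stderr line above ("[: : integer expression expected", common.sh line 33) is an unset flag reaching an arithmetic test as the empty string: '[' '' -eq 1 ']'. The harness tolerates it and moves on; the noise-free form of that pattern defaults the variable first (FLAG is a stand-in name here, not the actual variable common.sh tests):

# [ "$FLAG" -eq 1 ] errors out when FLAG is empty or unset;
# defaulting to 0 keeps the test quiet and the logic identical:
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi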
local -ga e810 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:46.271 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.271 
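The array setup above seeds the supported-NIC table from vendor:device pairs, and the loop then reports each detected function; 0x8086:0x159b is the Intel E810 part found at 0000:4b:00.0 and .1. What the scan reduces to, as a standalone sysfs sketch rather than the common.sh code itself:

# Report every PCI function whose vendor:device pair matches the E810 ID
# the log finds (sysfs exposes both as 0x-prefixed hex strings).
for dev in /sys/bus/pci/devices/*; do
    ven=$(<"$dev/vendor") did=$(<"$dev/device")
    [[ $ven == 0x8086 && $did == 0x159b ]] && echo "Found ${dev##*/} ($ven - $did)"
done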
08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:46.271 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:46.271 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
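Each matching function is then resolved to its kernel netdev through the device's net/ subdirectory, keeping only interfaces that are up (the repeated [[ up == up ]] checks; the trace does not show where the state string is read, so reading sysfs operstate below is an assumption that yields the same answer):

pci=0000:4b:00.0   # one of the two functions found above
for net in /sys/bus/pci/devices/$pci/net/*; do
    [[ $(<"$net/operstate") == up ]] && echo "Found net devices under $pci: ${net##*/}"
done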
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:46.271 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:46.271 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:46.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:46.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:06:46.272 00:06:46.272 --- 10.0.0.2 ping statistics --- 00:06:46.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.272 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:46.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:46.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:06:46.272 00:06:46.272 --- 10.0.0.1 ping statistics --- 00:06:46.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.272 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1755916 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1755916 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
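That completes nvmf_tcp_init: one port of the dual-port NIC (cvl_0_0) moves into a private namespace to play target at 10.0.0.2, the other (cvl_0_1) stays in the default namespace as the 10.0.0.1 initiator, and the two pings prove the loop through the physical link works both ways. The plumbing, re-listed from the trace as plain commands:

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # drop stale addresses
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                        # target port leaves the default ns
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                      # default ns -> target
ip netns exec "$NS" ping -c 1 10.0.0.1  # target ns -> initiator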
1755916 ']' 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.272 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:46.272 [2024-11-28 08:05:42.741147] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:06:46.272 [2024-11-28 08:05:42.741254] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.272 [2024-11-28 08:05:42.841630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.272 [2024-11-28 08:05:42.892620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:46.272 [2024-11-28 08:05:42.892672] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:46.272 [2024-11-28 08:05:42.892681] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:46.272 [2024-11-28 08:05:42.892689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:46.272 [2024-11-28 08:05:42.892695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
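nvmfappstart in one picture: the target binary runs inside the new namespace (so it owns 10.0.0.2) and the harness blocks until the RPC socket answers. waitforlisten's real probe is more elaborate; treat the polling loop below as a minimal stand-in under that assumption:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# Unix sockets live in the filesystem, not the netns, so rpc.py can poll
# the default /var/tmp/spdk.sock from outside the namespace.
until "$spdk/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done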
00:06:46.272 [2024-11-28 08:05:42.894533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.272 [2024-11-28 08:05:42.894694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.272 [2024-11-28 08:05:42.894695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.535 08:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.535 08:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:46.535 08:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:46.535 08:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:46.535 08:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:46.535 08:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:46.535 08:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:46.535 08:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:46.535 [2024-11-28 08:05:43.776057] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.535 08:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:46.796 08:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:47.057 [2024-11-28 08:05:44.179213] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:47.057 08:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:47.318 08:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:47.318 Malloc0 00:06:47.578 08:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:47.578 Delay0 00:06:47.579 08:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.838 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:48.096 NULL1 00:06:48.096 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
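With all three reactors up, the trace assembles the storage stack: a TCP transport, subsystem cnode1 listening on 10.0.0.2:4420 (plus the discovery subsystem), a RAM-backed Malloc0 wrapped by the artificial-latency Delay0 that becomes namespace 1, and a 1000 MiB NULL1 standing by as the resize victim. Re-listed as the plain rpc.py calls, same arguments as the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0     # 32 MiB, 512 B blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc bdev_null_create NULL1 1000 512          # 1000 MiB null bdev, resized below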
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:48.356 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1756295 00:06:48.356 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:48.356 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:06:48.356 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.356 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.616 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:48.616 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:48.877 true 00:06:48.877 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:06:48.877 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.877 08:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.136 08:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:49.136 08:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:49.396 true 00:06:49.396 08:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:06:49.396 08:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.657 08:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.657 08:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:49.657 08:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:49.917 true 00:06:49.917 08:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:06:49.917 08:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
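From here to the end of the excerpt the log is the stress loop itself: NULL1 joins the subsystem too, spdk_nvme_perf starts 30 seconds of queue-depth-128, 512-byte random reads against 10.0.0.2:4420, and for as long as that process lives the harness hot-removes namespace 1, re-adds Delay0, and grows NULL1 by one unit per pass (the lone "true" after each resize is the RPC returning success; the trace walks null_size from 1001 toward 1037). The loop's shape, condensed from the repeating pattern rather than quoted from ns_hotplug_stress.sh:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$spdk/build/bin/spdk_nvme_perf" -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do     # keep going while perf is alive
    "$spdk/scripts/rpc.py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    "$spdk/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    "$spdk/scripts/rpc.py" bdev_null_resize NULL1 $((++null_size))
done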
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.178 08:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.178 08:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:50.178 08:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:50.438 true 00:06:50.438 08:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:06:50.438 08:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.699 08:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.960 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:50.960 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:50.960 true 00:06:50.960 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:06:50.960 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.220 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.481 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:51.481 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:51.481 true 00:06:51.481 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:06:51.481 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.741 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.001 08:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:52.001 08:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:52.001 true 00:06:52.262 08:05:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:06:52.262 08:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.262 08:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.522 08:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:52.522 08:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:52.784 true 00:06:52.784 08:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:06:52.784 08:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.784 08:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.044 08:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:53.044 08:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:53.305 true 00:06:53.305 08:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:06:53.305 08:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.305 08:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.567 08:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:53.567 08:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:53.828 true 00:06:53.828 08:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:06:53.828 08:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.088 08:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.088 08:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:54.088 08:05:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:54.348 true 00:06:54.348 08:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:06:54.348 08:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.610 08:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.610 08:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:54.610 08:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:54.870 true 00:06:54.870 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:06:54.870 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.132 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.132 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:55.132 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:55.393 true 00:06:55.393 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:06:55.393 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.655 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.916 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:55.916 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:55.916 true 00:06:55.916 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:06:55.916 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.177 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.439 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:56.439 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:56.439 true 00:06:56.439 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:06:56.439 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.700 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.962 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:56.962 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:56.962 true 00:06:56.962 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:06:56.962 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.223 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.484 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:57.484 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:57.484 true 00:06:57.744 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:06:57.744 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.744 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.004 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:58.004 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:58.263 true 00:06:58.263 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:06:58.263 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.263 08:05:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.524 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:58.524 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:58.785 true 00:06:58.785 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:06:58.785 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.785 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.046 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:59.046 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:59.307 true 00:06:59.307 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:06:59.307 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.307 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.569 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:59.569 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:59.830 true 00:06:59.830 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:06:59.830 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.091 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.091 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:00.091 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:00.351 true 00:07:00.351 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:00.351 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.612 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.874 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:00.874 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:00.874 true 00:07:00.874 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:00.874 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.135 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.395 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:01.395 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:01.395 true 00:07:01.395 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:01.395 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.656 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.916 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:01.916 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:01.916 true 00:07:02.177 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:02.177 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.177 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.437 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:02.437 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:02.697 true 00:07:02.697 08:05:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:02.697 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.697 08:05:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.958 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:02.958 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:03.218 true 00:07:03.218 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:03.218 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:03.480 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:03.741 true 00:07:03.741 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:03.741 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.002 08:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.002 08:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:04.002 08:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:04.264 true 00:07:04.264 08:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:04.264 08:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.525 08:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.786 08:06:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:04.786 08:06:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:04.786 true 00:07:04.786 08:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:04.786 08:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.047 08:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.308 08:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:05.308 08:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:05.308 true 00:07:05.308 08:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:05.308 08:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.569 08:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.831 08:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:05.831 08:06:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:05.831 true 00:07:05.831 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:05.831 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.090 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.351 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:06.351 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:06.351 true 00:07:06.351 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:06.351 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.611 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.873 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:06.873 08:06:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:06.873 true 00:07:07.135 08:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:07.135 08:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.135 08:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.396 08:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:07.396 08:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:07.656 true 00:07:07.656 08:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:07.656 08:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.656 08:06:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.916 08:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:07.917 08:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:08.176 true 00:07:08.177 08:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:08.177 08:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.177 08:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.437 08:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:08.437 08:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:08.702 true 00:07:08.702 08:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:08.702 08:06:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.963 08:06:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.963 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:08.963 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:09.224 true 00:07:09.224 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:09.224 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.485 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.485 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:09.485 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:09.746 true 00:07:09.746 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:09.746 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.009 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.009 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:10.009 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:10.271 true 00:07:10.271 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:10.271 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.533 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.533 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:10.533 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:10.794 true 00:07:10.794 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:10.794 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.056 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.317 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:11.317 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:11.317 true 00:07:11.317 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:11.317 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.577 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.838 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:11.838 08:06:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:11.838 true 00:07:11.838 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:11.838 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.100 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.361 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:12.361 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:12.361 true 00:07:12.361 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:12.361 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.622 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.883 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:12.883 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:13.143 true 00:07:13.143 08:06:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:13.143 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.143 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.404 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:13.404 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:13.665 true 00:07:13.665 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:13.665 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.665 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.927 08:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:13.927 08:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:14.189 true 00:07:14.189 08:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:14.189 08:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.450 08:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.450 08:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:14.450 08:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:14.711 true 00:07:14.711 08:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:14.711 08:06:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.973 08:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.973 08:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:07:14.973 08:06:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:07:15.236 true 00:07:15.236 08:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:15.236 08:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.498 08:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.759 08:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:07:15.759 08:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:07:15.759 true 00:07:15.759 08:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:15.759 08:06:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.021 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.283 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:07:16.283 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:07:16.283 true 00:07:16.283 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:16.283 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.544 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.807 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:07:16.807 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:07:16.807 true 00:07:17.068 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:17.068 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.068 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.329 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:07:17.329 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:07:17.589 true 00:07:17.589 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:17.589 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.589 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.850 08:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:07:17.850 08:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:07:18.111 true 00:07:18.111 08:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:18.111 08:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.111 08:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.372 08:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:07:18.372 08:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:07:18.635 true 00:07:18.635 08:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295 00:07:18.635 08:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.635 Initializing NVMe Controllers 00:07:18.635 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:18.635 Controller IO queue size 128, less than required. 00:07:18.635 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:18.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:18.635 Initialization complete. Launching workers. 
00:07:18.635 ========================================================
00:07:18.635 Latency(us)
00:07:18.635 Device Information : IOPS MiB/s Average min max
00:07:18.635 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30559.16 14.92 4188.47 1135.56 8200.77
00:07:18.635 ========================================================
00:07:18.635 Total : 30559.16 14.92 4188.47 1135.56 8200.77
00:07:18.635
00:07:18.635 08:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:18.896 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:07:18.896 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:07:19.157 true
00:07:19.157 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1756295
00:07:19.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1756295) - No such process
00:07:19.157 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1756295
00:07:19.157 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:19.157 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:19.418 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:19.418 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:19.418 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:19.418 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:19.418 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:19.679 null0
00:07:19.679 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:19.679 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:19.679 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:19.679 null1
00:07:19.940 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:19.940 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:19.940 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:07:19.940 null2
00:07:19.940 08:06:17
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:19.940 08:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:19.940 08:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:20.202 null3 00:07:20.202 08:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:20.202 08:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:20.202 08:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:20.464 null4 00:07:20.464 08:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:20.464 08:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:20.464 08:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:20.464 null5 00:07:20.464 08:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:20.464 08:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:20.464 08:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:20.725 null6 00:07:20.725 08:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:20.725 08:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:20.725 08:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:20.725 null7 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
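(Annotation: the @44-@50 records that repeat above, with null_size stepping from 1023 to 1056, are the per-iteration hot-plug loop of this test. A minimal bash sketch of that loop, reconstructed from the trace markers rather than from the ns_hotplug_stress.sh source; rpc_py and PERF_PID are assumed stand-in names, not confirmed from the script:)

  # Sketch only; paraphrased from the @44-@50 markers, not verbatim source.
  # Assumptions: rpc_py points at spdk/scripts/rpc.py, PERF_PID is the background
  # I/O generator whose liveness gates the loop (pid 1756295 in this run).
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  null_size=1022
  while kill -0 "$PERF_PID" 2>/dev/null; do              # @44: loop while the I/O job lives
    "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-remove nsid 1
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: re-attach bdev Delay0
    null_size=$((null_size + 1))                         # @49: 1023, 1024, ..., 1056
    "$rpc_py" bdev_null_resize NULL1 "$null_size"        # @50: grow NULL1; prints "true" on success
  done
  wait "$PERF_PID"                                       # @53: reap the finished I/O job

The pattern exercises namespace hot-plug and null-bdev resize while TCP I/O is in flight; the latency summary above is the I/O side of that race completing cleanly.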
00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
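(Annotation: the @58-@64 records above mark the fan-out phase: eight null bdevs, then eight background add/remove workers. A minimal bash sketch under the same stand-in assumptions as the previous note:)

  nthreads=8                                           # @58
  pids=()                                              # @58
  for ((i = 0; i < nthreads; i++)); do                 # @59
    "$rpc_py" bdev_null_create "null$i" 100 4096       # @60: 100 MB null bdev, 4096-byte blocks
  done
  for ((i = 0; i < nthreads; i++)); do                 # @62
    add_remove "$((i + 1))" "null$i" &                 # @63: e.g. "add_remove 1 null0", backgrounded
    pids+=($!)                                         # @64: collect the worker pid
  done
  wait "${pids[@]}"                                    # @66: the "wait 1763731 1763732 ..." record below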
00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
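(Annotation: the @14-@18 records are the body of the add_remove function each worker runs. A sketch paraphrased from those markers; rpc_py as above:)

  add_remove() {                                       # called as: add_remove <nsid> <bdev>
    local nsid=$1 bdev=$2                              # @14: e.g. nsid=1 bdev=null0
    local i
    for ((i = 0; i < 10; i++)); do                     # @16: ten add/remove rounds per worker
      "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
      "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
    done
  }

Eight of these loops running concurrently against cnode1 are why the @16-@18 records from here on arrive interleaved rather than in nsid order.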
00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1763731 1763732 1763734 1763736 1763738 1763740 1763742 1763743 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.988 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:20.989 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:20.989 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.989 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:20.989 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.251 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:21.512 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.512 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.512 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.512 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:21.512 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:21.512 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.512 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.512 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.512 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.512 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.513 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
00:07:21.513 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.513 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.513 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:21.775 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.775 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.775 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:21.775 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.775 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.775 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:21.775 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.775 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.776 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.776 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.776 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.776 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.776 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.776 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.776 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.776 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:21.776 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.776 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:21.776 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:07:21.776 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:21.776 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:21.776 08:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.776 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.776 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.776 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.776 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:22.039 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.301 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:22.301 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.301 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:22.301 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:22.301 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:22.301 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:22.301 08:06:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:22.301 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.301 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.301 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:22.301 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.301 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.301 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:22.301 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.301 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.301 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:22.301 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.301 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.301 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:22.564 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.564 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.564 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:22.564 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.564 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.564 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:22.564 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.564 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.564 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
00:07:22.564 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.564 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.564 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.564 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:22.564 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:22.564 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.564 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:22.564 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:22.564 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.564 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.564 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:22.564 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:22.564 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:22.565 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:22.827 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.827 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.827 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:22.827 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.827 08:06:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.827 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.827 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:22.827 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.827 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.827 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.827 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:22.827 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.827 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:22.827 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.827 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.827 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:22.827 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.827 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.827 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:22.827 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.827 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.827 08:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:22.827 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:23.088 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.088 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.088 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
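Each of those wrapper invocations is a single JSON-RPC request to the running SPDK target. Spelled out for one nsid below; the request bodies are paraphrased from the SPDK RPC documentation, so treat the exact field names as an assumption rather than a capture from this run.

```bash
# Attach null bdev "null4" to the subsystem as namespace ID 5 (as in the log):
scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
#  -> {"method": "nvmf_subsystem_add_ns",
#      "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
#                 "namespace": {"bdev_name": "null4", "nsid": 5}}}

# Detach namespace ID 5 again:
scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
#  -> {"method": "nvmf_subsystem_remove_ns",
#      "params": {"nqn": "nqn.2016-06.io.spdk:cnode1", "nsid": 5}}
```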
00:07:23.088 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:23.089 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:23.089 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:23.089 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:23.089 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.089 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:23.089 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.089 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.089 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:23.089 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.089 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.089 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:23.089 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:23.089 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.089 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.089 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:23.089 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.089 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.089 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:23.089 08:06:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.089 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.089 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:23.089 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.089 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.089 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:23.350 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:23.350 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.350 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.350 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:23.350 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:23.350 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:23.350 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:23.350 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.350 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.350 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:23.350 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:23.350 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.350 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:23.350 08:06:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.350 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.350 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
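If you want to confirm by hand that the subsystem is back to a full complement of namespaces between passes, querying the target with `nvmf_get_subsystems` (a standard SPDK RPC) works; the jq filter below is an illustrative assumption about the response layout, not output from this run.

```bash
# List the nsids currently attached to cnode1; after an add pass the
# stress loop expects 1..8 to all be present.
scripts/rpc.py nvmf_get_subsystems \
	| jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces[].nsid'
```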
00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.611 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:23.873 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.873 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:23.873 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:23.873 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.873 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.873 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:23.873 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:23.873 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.873 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.873 08:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:23.873 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:23.873 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.873 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.873 08:06:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:23.873 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.873 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.873 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:23.873 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.873 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.873 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:23.873 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:24.135 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.135 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.135 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:24.135 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:24.135 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.135 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.135 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:24.135 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.135 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.135 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:24.135 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:24.135 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.135 08:06:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:24.135 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.135 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.135 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:24.135 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:24.135 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:24.135 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.135 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.135 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:24.135 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:24.396 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.396 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.396 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:24.396 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.396 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.396 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:24.396 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.396 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.396 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:24.396 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:24.396 08:06:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.396 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.396 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.396 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.396 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:24.396 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.396 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.396 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:24.396 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:24.396 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:24.396 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:24.658 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.658 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.658 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.658 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:24.658 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:24.658 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.658 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.658 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.658 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.658 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.658 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.658 08:06:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.658 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.658 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.658 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.658 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.658 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.658 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:24.658 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:24.658 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:24.658 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:24.658 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:24.658 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:24.658 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:24.658 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:24.923 rmmod nvme_tcp 00:07:24.923 rmmod nvme_fabrics 00:07:24.923 rmmod nvme_keyring 00:07:24.923 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:24.923 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:24.923 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:24.923 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1755916 ']' 00:07:24.923 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1755916 00:07:24.923 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1755916 ']' 00:07:24.923 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1755916 00:07:24.923 08:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:24.923 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.923 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1755916 00:07:24.923 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:24.923 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:24.923 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1755916' 00:07:24.923 killing process with pid 1755916 00:07:24.923 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1755916 00:07:24.923 08:06:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1755916 00:07:24.923 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:24.923 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:24.923 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:24.923 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:24.923 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:24.923 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:24.923 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:24.923 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:24.923 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:24.923 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.923 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:24.923 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.589 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:27.589 00:07:27.589 real 0m49.331s 00:07:27.589 user 3m20.618s 00:07:27.589 sys 0m17.640s 00:07:27.589 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.589 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:27.589 ************************************ 00:07:27.589 END TEST nvmf_ns_hotplug_stress 00:07:27.589 ************************************ 00:07:27.589 08:06:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:27.589 08:06:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:27.589 08:06:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.589 08:06:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:27.589 ************************************ 00:07:27.590 START TEST nvmf_delete_subsystem 00:07:27.590 ************************************ 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:27.590 * Looking for test storage... 
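The teardown that closed the test above follows a fixed pattern: kill the nvmf_tgt process recorded at start-up, unload the kernel initiator modules, and undo the test's network state. Condensed from the commands actually logged (the pid is the one from this run; the `2>/dev/null` is added here for readability):

```bash
pid=1755916                               # nvmf_tgt pid captured at test start
if kill -0 "$pid" 2>/dev/null; then       # still alive? (killprocess helper)
	echo "killing process with pid $pid"
	kill "$pid" && wait "$pid"            # wait works because it is a child shell job
fi
sync
modprobe -v -r nvme-tcp                   # rmmod nvme_tcp, as echoed in the log
modprobe -v -r nvme-fabrics               # nvme_fabrics / nvme_keyring follow
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test's firewall rules
ip -4 addr flush cvl_0_1                  # clear the test IP from the second port
```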
00:07:27.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:27.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.590 --rc genhtml_branch_coverage=1 00:07:27.590 --rc genhtml_function_coverage=1 00:07:27.590 --rc genhtml_legend=1 00:07:27.590 --rc geninfo_all_blocks=1 00:07:27.590 --rc geninfo_unexecuted_blocks=1 00:07:27.590 00:07:27.590 ' 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:27.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.590 --rc genhtml_branch_coverage=1 00:07:27.590 --rc genhtml_function_coverage=1 00:07:27.590 --rc genhtml_legend=1 00:07:27.590 --rc geninfo_all_blocks=1 00:07:27.590 --rc geninfo_unexecuted_blocks=1 00:07:27.590 00:07:27.590 ' 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:27.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.590 --rc genhtml_branch_coverage=1 00:07:27.590 --rc genhtml_function_coverage=1 00:07:27.590 --rc genhtml_legend=1 00:07:27.590 --rc geninfo_all_blocks=1 00:07:27.590 --rc geninfo_unexecuted_blocks=1 00:07:27.590 00:07:27.590 ' 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:27.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.590 --rc genhtml_branch_coverage=1 00:07:27.590 --rc genhtml_function_coverage=1 00:07:27.590 --rc genhtml_legend=1 00:07:27.590 --rc geninfo_all_blocks=1 00:07:27.590 --rc geninfo_unexecuted_blocks=1 00:07:27.590 00:07:27.590 ' 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:27.590 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:27.591 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:27.591 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:27.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:27.591 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:27.591 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:27.591 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:27.591 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:27.591 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:27.591 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:27.591 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:27.591 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:27.591 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:27.591 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.591 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:27.591 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.591 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:27.591 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:27.591 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:27.591 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.735 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.735 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:35.735 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:35.735 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:35.735 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:35.735 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:35.735 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:35.735 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:35.735 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:35.735 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:35.735 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:35.735 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:35.735 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:35.735 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:35.736 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:35.736 
08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:35.736 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:35.736 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:35.736 Found net devices under 0000:4b:00.1: cvl_0_1 
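[Annotation] The discovery pass above is driven entirely by sysfs: gather_supported_nvmf_pci_devs matches PCI vendor/device IDs against the e810/x722/mlx lists, then resolves the kernel net devices sitting on each matched function. A minimal standalone sketch of that walk follows; the E810 device IDs (0x1592, 0x159b) and the cvl_* names come from the trace, while the loop structure is an illustrative simplification, not the exact common.sh implementation.

    #!/usr/bin/env bash
    # Sketch: find Intel E810 PCI functions and the net devices bound to
    # them, the way the trace above walks /sys. Illustrative only.
    intel=0x8086
    e810=(0x1592 0x159b)   # two E810 device IDs taken from the trace above
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor")
        device=$(<"$dev/device")
        [[ $vendor == "$intel" ]] || continue
        for id in "${e810[@]}"; do
            [[ $device == "$id" ]] || continue
            echo "Found ${dev##*/} ($vendor - $device)"
            # Each kernel netdev on this function appears as a directory
            # entry under net/, e.g. cvl_0_0 / cvl_0_1 in the trace.
            for net in "$dev"/net/*; do
                [[ -e $net ]] && echo "  net device: ${net##*/}"
            done
        done
    done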
00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:35.736 08:06:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:35.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:35.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.560 ms 00:07:35.736 00:07:35.736 --- 10.0.0.2 ping statistics --- 00:07:35.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.736 rtt min/avg/max/mdev = 0.560/0.560/0.560/0.000 ms 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:35.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:07:35.736 00:07:35.736 --- 10.0.0.1 ping statistics --- 00:07:35.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.736 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1768936 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1768936 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1768936 ']' 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.736 08:06:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.736 [2024-11-28 08:06:32.153422] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:07:35.736 [2024-11-28 08:06:32.153491] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.736 [2024-11-28 08:06:32.253789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:35.736 [2024-11-28 08:06:32.306331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.736 [2024-11-28 08:06:32.306383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.736 [2024-11-28 08:06:32.306397] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.736 [2024-11-28 08:06:32.306405] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.736 [2024-11-28 08:06:32.306410] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:35.736 [2024-11-28 08:06:32.308002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.736 [2024-11-28 08:06:32.308006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:35.736 08:06:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.998 [2024-11-28 08:06:33.032178] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:35.998 08:06:33 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.998 [2024-11-28 08:06:33.056465] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.998 NULL1 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.998 Delay0 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1769045 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:35.998 08:06:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:35.998 [2024-11-28 08:06:33.183567] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
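[Annotation] Collected in one place, the target-side setup the rpc_cmd calls above just performed amounts to the sequence below. This is a sketch: it assumes a running nvmf_tgt and SPDK's scripts/rpc.py in place of the suite's rpc_cmd wrapper, though every flag and value is taken verbatim from the trace. The delay bdev's four latency knobs are all 1,000,000 us (~1 s), which is what keeps a queue depth of 128 I/Os in flight long enough for the upcoming nvmf_delete_subsystem to race against them.

    # Target side: TCP transport, a subsystem capped at 10 namespaces, a
    # listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev.
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512              # 1000 MiB, 512-byte blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s added latency per class
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # Initiator side: 5 s of qd-128 random 70/30 R/W in 512-byte I/Os on
    # cores 2-3 (-c 0xC), backgrounded so the subsystem can be deleted mid-run.
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &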
00:07:37.917 08:06:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:37.917 08:06:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.917 08:06:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 starting I/O failed: -6 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 starting I/O failed: -6 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 starting I/O failed: -6 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 starting I/O failed: -6 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 starting I/O failed: -6 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 starting I/O failed: -6 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 starting I/O failed: -6 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 starting I/O failed: -6 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 starting I/O failed: -6 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 starting I/O failed: -6 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 starting I/O failed: -6 00:07:38.178 [2024-11-28 08:06:35.309085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfc680 is same with the state(6) to be set 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read 
completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 [2024-11-28 08:06:35.311119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfc2c0 is same with the state(6) to be set 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 starting I/O failed: -6 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 starting I/O failed: -6 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 starting I/O failed: -6 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error 
(sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 starting I/O failed: -6 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 starting I/O failed: -6 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 starting I/O failed: -6 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 starting I/O failed: -6 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 starting I/O failed: -6 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 starting I/O failed: -6 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 starting I/O failed: -6 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 starting I/O failed: -6 00:07:38.178 [2024-11-28 08:06:35.314348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2d8000d490 is same with the state(6) to be set 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Write completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.178 Read completed with error (sct=0, sc=8) 00:07:38.179 Read completed with error (sct=0, sc=8) 00:07:38.179 Read completed 
with error (sct=0, sc=8) 00:07:38.179 Write completed with error (sct=0, sc=8) 00:07:38.179 Read completed with error (sct=0, sc=8) 00:07:38.179 Read completed with error (sct=0, sc=8) 00:07:38.179 Write completed with error (sct=0, sc=8) 00:07:38.179 Read completed with error (sct=0, sc=8) 00:07:38.179 Read completed with error (sct=0, sc=8) 00:07:38.179 Read completed with error (sct=0, sc=8) 00:07:38.179 Write completed with error (sct=0, sc=8) 00:07:38.179 Read completed with error (sct=0, sc=8) 00:07:38.179 Read completed with error (sct=0, sc=8) 00:07:38.179 Read completed with error (sct=0, sc=8) 00:07:38.179 Write completed with error (sct=0, sc=8) 00:07:38.179 Read completed with error (sct=0, sc=8) 00:07:38.179 Read completed with error (sct=0, sc=8) 00:07:38.179 Read completed with error (sct=0, sc=8) 00:07:38.179 Read completed with error (sct=0, sc=8) 00:07:38.179 Write completed with error (sct=0, sc=8) 00:07:38.179 Read completed with error (sct=0, sc=8) 00:07:38.179 Read completed with error (sct=0, sc=8) 00:07:38.179 Read completed with error (sct=0, sc=8) 00:07:38.179 Write completed with error (sct=0, sc=8) 00:07:38.179 Read completed with error (sct=0, sc=8) 00:07:38.179 Read completed with error (sct=0, sc=8) 00:07:38.179 Read completed with error (sct=0, sc=8) 00:07:38.179 Write completed with error (sct=0, sc=8) 00:07:38.179 Write completed with error (sct=0, sc=8) 00:07:39.122 [2024-11-28 08:06:36.281620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfd9b0 is same with the state(6) to be set 00:07:39.122 Write completed with error (sct=0, sc=8) 00:07:39.122 Write completed with error (sct=0, sc=8) 00:07:39.122 Read completed with error (sct=0, sc=8) 00:07:39.122 Read completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 [2024-11-28 08:06:36.312516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfc4a0 is same with the state(6) to be set 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error 
(sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 [2024-11-28 08:06:36.313032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cfc860 is same with the state(6) to be set 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 [2024-11-28 08:06:36.316214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2d8000d7c0 is same with the state(6) to be set 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 Read completed with error (sct=0, sc=8) 00:07:39.123 Write completed with error (sct=0, sc=8) 00:07:39.123 [2024-11-28 08:06:36.316476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2d8000d020 is same with the state(6) to be set 00:07:39.123 Initializing NVMe Controllers 00:07:39.123 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:39.123 Controller IO queue size 128, less than required. 
00:07:39.123 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:39.123 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:39.123 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:39.123 Initialization complete. Launching workers. 00:07:39.123 ======================================================== 00:07:39.123 Latency(us) 00:07:39.123 Device Information : IOPS MiB/s Average min max 00:07:39.123 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.26 0.08 909493.92 957.08 1007876.59 00:07:39.123 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 167.24 0.08 900847.58 298.10 1012232.53 00:07:39.123 ======================================================== 00:07:39.123 Total : 330.50 0.16 905118.66 298.10 1012232.53 00:07:39.123 00:07:39.123 [2024-11-28 08:06:36.316962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfd9b0 (9): Bad file descriptor 00:07:39.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:39.123 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.123 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:39.123 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1769045 00:07:39.123 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:39.696 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:39.696 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1769045 00:07:39.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1769045) - No such process 00:07:39.696 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1769045 00:07:39.696 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:39.696 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1769045 00:07:39.696 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:39.696 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.696 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:39.696 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.696 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1769045 00:07:39.696 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:39.696 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:39.696 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:39.696 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:07:39.697 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:39.697 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.697 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:39.697 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.697 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:39.697 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.697 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:39.697 [2024-11-28 08:06:36.846828] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:39.697 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.697 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.697 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.697 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:39.697 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.697 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1769875 00:07:39.697 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:39.697 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:39.697 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1769875 00:07:39.697 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:39.697 [2024-11-28 08:06:36.945817] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
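[Annotation] The repeated kill -0 / sleep 0.5 pairs that follow are a bounded poll: the script gives perf a fixed number of half-second ticks to notice the vanished subsystem and exit on its own, then asserts via the NOT/wait helpers that the process really died with an error. A sketch of the pattern is below; the pid variable and the 20-tick budget mirror the trace, but the loop shape is illustrative rather than a copy of delete_subsystem.sh.

    # Sketch: wait for the backgrounded perf process to exit, giving up
    # after ~10 s. kill -0 delivers no signal; it only tests pid existence.
    perf_pid=$!
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        if (( delay++ > 20 )); then        # 20 ticks at 0.5 s each
            echo "perf still running after subsystem delete" >&2
            exit 1
        fi
        sleep 0.5
    done
    # Once kill -0 fails ("No such process" in the trace), reap the child.
    # A non-zero exit status from perf is the expected outcome here.
    wait "$perf_pid"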
00:07:40.268 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:40.268 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1769875 00:07:40.268 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:40.840 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:40.840 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1769875 00:07:40.840 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:41.102 08:06:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:41.102 08:06:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1769875 00:07:41.102 08:06:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:41.673 08:06:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:41.673 08:06:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1769875 00:07:41.673 08:06:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:42.244 08:06:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:42.244 08:06:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1769875 00:07:42.245 08:06:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:42.818 08:06:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:42.818 08:06:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1769875 00:07:42.818 08:06:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:42.818 Initializing NVMe Controllers 00:07:42.818 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:42.818 Controller IO queue size 128, less than required. 00:07:42.818 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:42.818 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:42.818 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:42.818 Initialization complete. Launching workers. 
00:07:42.818 ======================================================== 00:07:42.818 Latency(us) 00:07:42.818 Device Information : IOPS MiB/s Average min max 00:07:42.818 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001822.92 1000154.40 1004913.36 00:07:42.818 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002622.43 1000350.03 1007425.76 00:07:42.818 ======================================================== 00:07:42.818 Total : 256.00 0.12 1002222.68 1000154.40 1007425.76 00:07:42.818 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1769875 00:07:43.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1769875) - No such process 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1769875 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:43.390 rmmod nvme_tcp 00:07:43.390 rmmod nvme_fabrics 00:07:43.390 rmmod nvme_keyring 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1768936 ']' 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1768936 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1768936 ']' 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1768936 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1768936 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1768936' 00:07:43.390 killing process with pid 1768936 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1768936 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1768936 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:43.390 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:45.936 00:07:45.936 real 0m18.392s 00:07:45.936 user 0m30.700s 00:07:45.936 sys 0m6.949s 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.936 ************************************ 00:07:45.936 END TEST nvmf_delete_subsystem 00:07:45.936 ************************************ 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:45.936 ************************************ 00:07:45.936 START TEST nvmf_host_management 00:07:45.936 ************************************ 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:45.936 * Looking for test storage... 
00:07:45.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.936 08:06:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:45.936 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:45.936 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.936 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:45.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.937 --rc genhtml_branch_coverage=1 00:07:45.937 --rc genhtml_function_coverage=1 00:07:45.937 --rc genhtml_legend=1 00:07:45.937 --rc geninfo_all_blocks=1 00:07:45.937 --rc geninfo_unexecuted_blocks=1 00:07:45.937 00:07:45.937 ' 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:45.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.937 --rc genhtml_branch_coverage=1 00:07:45.937 --rc genhtml_function_coverage=1 00:07:45.937 --rc genhtml_legend=1 00:07:45.937 --rc geninfo_all_blocks=1 00:07:45.937 --rc geninfo_unexecuted_blocks=1 00:07:45.937 00:07:45.937 ' 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:45.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.937 --rc genhtml_branch_coverage=1 00:07:45.937 --rc genhtml_function_coverage=1 00:07:45.937 --rc genhtml_legend=1 00:07:45.937 --rc geninfo_all_blocks=1 00:07:45.937 --rc geninfo_unexecuted_blocks=1 00:07:45.937 00:07:45.937 ' 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:45.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.937 --rc genhtml_branch_coverage=1 00:07:45.937 --rc genhtml_function_coverage=1 00:07:45.937 --rc genhtml_legend=1 00:07:45.937 --rc geninfo_all_blocks=1 00:07:45.937 --rc geninfo_unexecuted_blocks=1 00:07:45.937 00:07:45.937 ' 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:45.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:45.937 08:06:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:54.087 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:54.088 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:54.088 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:54.088 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.088 08:06:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:54.088 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:54.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:54.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:07:54.088 00:07:54.088 --- 10.0.0.2 ping statistics --- 00:07:54.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.088 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:54.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:54.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:07:54.088 00:07:54.088 --- 10.0.0.1 ping statistics --- 00:07:54.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.088 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1774825 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1774825 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:54.088 08:06:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1774825 ']' 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.088 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.089 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.089 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.089 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.089 [2024-11-28 08:06:50.673821] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:07:54.089 [2024-11-28 08:06:50.673887] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.089 [2024-11-28 08:06:50.772869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.089 [2024-11-28 08:06:50.826510] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.089 [2024-11-28 08:06:50.826564] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.089 [2024-11-28 08:06:50.826573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:54.089 [2024-11-28 08:06:50.826580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:54.089 [2024-11-28 08:06:50.826587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:54.089 [2024-11-28 08:06:50.829009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.089 [2024-11-28 08:06:50.829186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.089 [2024-11-28 08:06:50.829354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:54.089 [2024-11-28 08:06:50.829354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.351 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.351 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:54.351 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:54.351 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:54.351 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.351 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.351 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:54.351 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.351 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.351 [2024-11-28 08:06:51.551139] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.351 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.351 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:54.351 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:54.351 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.351 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:54.351 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:54.351 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:54.351 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.351 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.351 Malloc0 00:07:54.351 [2024-11-28 08:06:51.631442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:54.612 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.612 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:54.612 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:54.612 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.612 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1775046 00:07:54.612 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1775046 /var/tmp/bdevperf.sock 00:07:54.612 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1775046 ']' 00:07:54.612 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:54.612 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.612 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:54.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:54.612 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:54.612 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.612 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:54.612 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.612 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:54.612 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:54.612 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:54.612 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:54.612 { 00:07:54.612 "params": { 00:07:54.612 "name": "Nvme$subsystem", 00:07:54.612 "trtype": "$TEST_TRANSPORT", 00:07:54.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:54.612 "adrfam": "ipv4", 00:07:54.612 "trsvcid": "$NVMF_PORT", 00:07:54.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:54.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:54.612 "hdgst": ${hdgst:-false}, 00:07:54.612 "ddgst": ${ddgst:-false} 00:07:54.612 }, 00:07:54.612 "method": "bdev_nvme_attach_controller" 00:07:54.612 } 00:07:54.612 EOF 00:07:54.612 )") 00:07:54.612 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:54.612 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:54.612 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:54.612 08:06:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:54.612 "params": { 00:07:54.612 "name": "Nvme0", 00:07:54.612 "trtype": "tcp", 00:07:54.612 "traddr": "10.0.0.2", 00:07:54.612 "adrfam": "ipv4", 00:07:54.612 "trsvcid": "4420", 00:07:54.612 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:54.613 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:54.613 "hdgst": false, 00:07:54.613 "ddgst": false 00:07:54.613 }, 00:07:54.613 "method": "bdev_nvme_attach_controller" 00:07:54.613 }' 00:07:54.613 [2024-11-28 08:06:51.742206] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:07:54.613 [2024-11-28 08:06:51.742276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775046 ] 00:07:54.613 [2024-11-28 08:06:51.835168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.613 [2024-11-28 08:06:51.888545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.185 Running I/O for 10 seconds... 00:07:55.448 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.448 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:55.448 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:55.448 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.448 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.448 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.448 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:55.448 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:55.448 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:55.448 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:55.448 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:55.448 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:55.448 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:55.448 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:55.448 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:55.449 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:55.449 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.449 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.449 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.449 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:07:55.449 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:07:55.449 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:55.449 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:55.449 08:06:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:55.449 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:55.449 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.449 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.449 [2024-11-28 08:06:52.653414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 
08:06:52.653649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653831] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.653989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.449 [2024-11-28 08:06:52.653999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.449 [2024-11-28 08:06:52.654010] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:55.449 [2024-11-28 08:06:52.654020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:55.449 [2024-11-28 08:06:52.654027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the READ command / ABORTED - SQ DELETION completion pair above repeats verbatim for cid:29 through cid:62 (lba:85632 through lba:89856 in steps of 128 blocks, len:128 each), timestamps 08:06:52.654037 through 08:06:52.654632 ...]
00:07:55.450 [2024-11-28 08:06:52.654641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208dee0 is same with the state(6) to be set
00:07:55.450 [2024-11-28 08:06:52.655968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:07:55.450 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:55.450 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:07:55.450 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:55.450 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:55.450 task offset: 89984 on job bdev=Nvme0n1 fails
00:07:55.450
00:07:55.450 Latency(us)
00:07:55.450 [2024-11-28T07:06:52.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:55.450 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:55.450 Job: Nvme0n1 ended in about 0.48 seconds with error
00:07:55.450 Verification LBA range: start 0x0 length 0x400
00:07:55.450 Nvme0n1 : 0.48 1340.74 83.80 134.07 0.00 42218.69 5925.55 36700.16
00:07:55.450 [2024-11-28T07:06:52.739Z] ===================================================================================================================
00:07:55.450 [2024-11-28T07:06:52.739Z] Total : 1340.74 83.80 134.07 0.00 42218.69 5925.55 36700.16
00:07:55.450 [2024-11-28 08:06:52.658269]
app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:55.450 [2024-11-28 08:06:52.658313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e75010 (9): Bad file descriptor 00:07:55.450 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.450 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:55.450 [2024-11-28 08:06:52.719323] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:56.393 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1775046 00:07:56.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1775046) - No such process 00:07:56.393 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:56.394 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:56.394 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:56.394 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:56.394 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:56.394 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:56.394 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:56.394 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:56.394 { 00:07:56.394 "params": { 00:07:56.394 "name": "Nvme$subsystem", 00:07:56.394 "trtype": "$TEST_TRANSPORT", 00:07:56.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:56.394 "adrfam": "ipv4", 00:07:56.394 "trsvcid": "$NVMF_PORT", 00:07:56.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:56.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:56.394 "hdgst": ${hdgst:-false}, 00:07:56.394 "ddgst": ${ddgst:-false} 00:07:56.394 }, 00:07:56.394 "method": "bdev_nvme_attach_controller" 00:07:56.394 } 00:07:56.394 EOF 00:07:56.394 )") 00:07:56.655 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:56.655 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
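The heredoc template traced above is how gen_nvmf_target_json assembles the bdev_nvme_attach_controller entry that bdevperf reads over /dev/fd/62; the fully resolved params/method object is printed a few lines below. A minimal by-hand sketch of the same invocation, assuming bdevperf's usual top-level subsystems/bdev/config JSON layout (the wrapper is not shown in this trace) and reusing the values resolved in this run:

# Hedged sketch, not the harness's own code path; the subsystems/bdev wrapper
# is an assumption, all parameter values are copied verbatim from the trace.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /dev/stdin -q 64 -o 65536 -w verify -t 1 <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF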
00:07:56.655 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:56.655 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:56.655 "params": { 00:07:56.655 "name": "Nvme0", 00:07:56.655 "trtype": "tcp", 00:07:56.655 "traddr": "10.0.0.2", 00:07:56.655 "adrfam": "ipv4", 00:07:56.655 "trsvcid": "4420", 00:07:56.655 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:56.655 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:56.655 "hdgst": false, 00:07:56.655 "ddgst": false 00:07:56.655 }, 00:07:56.655 "method": "bdev_nvme_attach_controller" 00:07:56.655 }' 00:07:56.655 [2024-11-28 08:06:53.729213] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:07:56.655 [2024-11-28 08:06:53.729267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775401 ] 00:07:56.655 [2024-11-28 08:06:53.817454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.655 [2024-11-28 08:06:53.852208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.228 Running I/O for 1 seconds... 00:07:58.174 2033.00 IOPS, 127.06 MiB/s 00:07:58.174 Latency(us) 00:07:58.174 [2024-11-28T07:06:55.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.174 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:58.174 Verification LBA range: start 0x0 length 0x400 00:07:58.174 Nvme0n1 : 1.01 2079.26 129.95 0.00 0.00 30104.14 1747.63 32331.09 00:07:58.174 [2024-11-28T07:06:55.463Z] =================================================================================================================== 00:07:58.174 [2024-11-28T07:06:55.463Z] Total : 2079.26 129.95 0.00 0.00 30104.14 1747.63 32331.09 00:07:58.174 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:58.174 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:58.174 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:58.174 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:58.174 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:58.174 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:58.175 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:58.175 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:58.175 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:58.175 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:58.175 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:58.175 rmmod nvme_tcp 00:07:58.175 rmmod nvme_fabrics 00:07:58.175 rmmod nvme_keyring 00:07:58.175 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:58.175 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:58.175 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:58.175 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1774825 ']' 00:07:58.175 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1774825 00:07:58.175 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1774825 ']' 00:07:58.175 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1774825 00:07:58.175 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:58.175 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.175 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1774825 00:07:58.436 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:58.436 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:58.436 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1774825' 00:07:58.436 killing process with pid 1774825 00:07:58.436 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1774825 00:07:58.436 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1774825 00:07:58.436 [2024-11-28 08:06:55.567375] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:58.436 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:58.436 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:58.436 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:58.436 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:58.436 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:58.436 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:58.436 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:58.436 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:58.436 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:58.436 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.436 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.436 08:06:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:00.983 08:06:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:00.983 00:08:00.983 real 0m14.868s 00:08:00.983 user 0m23.866s 00:08:00.983 sys 0m6.866s 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.983 ************************************ 00:08:00.983 END TEST nvmf_host_management 00:08:00.983 ************************************ 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:00.983 ************************************ 00:08:00.983 START TEST nvmf_lvol 00:08:00.983 ************************************ 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:00.983 * Looking for test storage... 00:08:00.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:00.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.983 --rc genhtml_branch_coverage=1 00:08:00.983 --rc genhtml_function_coverage=1 00:08:00.983 --rc genhtml_legend=1 00:08:00.983 --rc geninfo_all_blocks=1 00:08:00.983 --rc geninfo_unexecuted_blocks=1 00:08:00.983 00:08:00.983 ' 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:00.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.983 --rc genhtml_branch_coverage=1 00:08:00.983 --rc genhtml_function_coverage=1 00:08:00.983 --rc genhtml_legend=1 00:08:00.983 --rc geninfo_all_blocks=1 00:08:00.983 --rc geninfo_unexecuted_blocks=1 00:08:00.983 00:08:00.983 ' 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:00.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.983 --rc genhtml_branch_coverage=1 00:08:00.983 --rc genhtml_function_coverage=1 00:08:00.983 --rc genhtml_legend=1 00:08:00.983 --rc geninfo_all_blocks=1 00:08:00.983 --rc geninfo_unexecuted_blocks=1 00:08:00.983 00:08:00.983 ' 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:00.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.983 --rc genhtml_branch_coverage=1 00:08:00.983 --rc genhtml_function_coverage=1 00:08:00.983 --rc genhtml_legend=1 00:08:00.983 --rc geninfo_all_blocks=1 00:08:00.983 --rc geninfo_unexecuted_blocks=1 00:08:00.983 00:08:00.983 ' 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
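The cmp_versions walk traced here (and repeated at the top of every test) is a plain dot-separated version comparison: split both version strings on '.', '-' and ':', then compare component-wise until one side wins, so lt 1.15 2 succeeds and the legacy lcov 1.x --rc option spelling is kept. A condensed, behaviorally similar sketch of that helper, a paraphrase rather than the verbatim scripts/common.sh source:

# Hedged re-implementation of the lt/cmp_versions logic in the trace above.
lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
  local IFS=.-:             # split versions on '.', '-' and ':'
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$3"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
  done
  [[ $2 == '==' ]]          # every component equal
}
# lt 1.15 2 -> success (1 < 2 at the first component), as in the trace.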
00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.983 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:00.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:00.984 08:06:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:09.146 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:09.146 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:09.146 08:07:05 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:09.146 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:09.146 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:09.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:08:09.146 00:08:09.146 --- 10.0.0.2 ping statistics --- 00:08:09.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.146 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:08:09.146 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:09.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:09.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:08:09.147 00:08:09.147 --- 10.0.0.1 ping statistics --- 00:08:09.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.147 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:08:09.147 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.147 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:09.147 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:09.147 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.147 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:09.147 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:09.147 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.147 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:09.147 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:09.147 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:09.147 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:09.147 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:09.147 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:09.147 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1780084 00:08:09.147 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1780084 00:08:09.147 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:09.147 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1780084 ']' 00:08:09.147 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.147 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.147 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.147 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.147 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:09.147 [2024-11-28 08:07:05.626359] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
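By this point the trace above has pinned down the test topology: the first E810 port (cvl_0_0) moves into the cvl_0_0_ns_spdk namespace and carries the target address 10.0.0.2, cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator, an SPDK_NVMF-tagged iptables rule opens TCP port 4420, both directions are ping-verified, and nvmf_tgt is then launched inside the namespace. The same plumbing, consolidated into one hedged recap (every command appears verbatim in the trace; run as root):

# Hedged recap of the namespace setup performed above, not a new code path.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator

Tagging the rule with an SPDK_NVMF comment is what lets nvmftestfini strip it later, via the iptables-save | grep -v SPDK_NVMF | iptables-restore step visible in the teardown traces.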
00:08:09.147 [2024-11-28 08:07:05.626436] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.147 [2024-11-28 08:07:05.727037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:09.147 [2024-11-28 08:07:05.778502] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.147 [2024-11-28 08:07:05.778558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.147 [2024-11-28 08:07:05.778566] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.147 [2024-11-28 08:07:05.778574] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.147 [2024-11-28 08:07:05.778580] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.147 [2024-11-28 08:07:05.780461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.147 [2024-11-28 08:07:05.780621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.147 [2024-11-28 08:07:05.780621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.408 08:07:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.408 08:07:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:09.408 08:07:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:09.408 08:07:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:09.408 08:07:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:09.408 08:07:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.408 08:07:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:09.408 [2024-11-28 08:07:06.662131] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.669 08:07:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:09.669 08:07:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:09.669 08:07:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:09.931 08:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:09.931 08:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:10.191 08:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:10.452 08:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f55bd1a6-8a31-4019-99b5-4ba0f196e50f 00:08:10.452 08:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f55bd1a6-8a31-4019-99b5-4ba0f196e50f lvol 20 00:08:10.712 08:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=88dfe43c-9013-42d5-9031-80f577d76bf0 00:08:10.712 08:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:10.712 08:07:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 88dfe43c-9013-42d5-9031-80f577d76bf0 00:08:10.972 08:07:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:11.232 [2024-11-28 08:07:08.274759] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:11.232 08:07:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:11.232 08:07:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1780783 00:08:11.232 08:07:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:11.232 08:07:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:12.615 08:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 88dfe43c-9013-42d5-9031-80f577d76bf0 MY_SNAPSHOT 00:08:12.615 08:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6c3cd6bd-a7ae-4a44-b272-d245244e07d4 00:08:12.615 08:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 88dfe43c-9013-42d5-9031-80f577d76bf0 30 00:08:12.876 08:07:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6c3cd6bd-a7ae-4a44-b272-d245244e07d4 MY_CLONE 00:08:12.876 08:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d71da6be-6ad7-44e7-9ec9-caa6fe585de5 00:08:12.876 08:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d71da6be-6ad7-44e7-9ec9-caa6fe585de5 00:08:13.447 08:07:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1780783 00:08:21.651 Initializing NVMe Controllers 00:08:21.651 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:21.651 Controller IO queue size 128, less than required. 00:08:21.651 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
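While the spdk_nvme_perf job started at target/nvmf_lvol.sh@41 runs against the namespace, the script live-snapshots, resizes, clones and inflates the lvol underneath it, and the whole lifecycle reads more clearly as one sequence before the perf numbers land below. A hedged recap of the rpc.py calls from the xtrace above (the long /var/jenkins/.../scripts/rpc.py path is shortened to rpc.py, and the UUIDs are the ones this particular run generated):

# Condensed verbatim from the trace; rpc.py stands for the full scripts/rpc.py path.
rpc.py bdev_malloc_create 64 512                       # -> Malloc0
rpc.py bdev_malloc_create 64 512                       # -> Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
rpc.py bdev_lvol_create_lvstore raid0 lvs              # -> f55bd1a6-8a31-4019-99b5-4ba0f196e50f
rpc.py bdev_lvol_create -u f55bd1a6-8a31-4019-99b5-4ba0f196e50f lvol 20
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 88dfe43c-9013-42d5-9031-80f577d76bf0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_lvol_snapshot 88dfe43c-9013-42d5-9031-80f577d76bf0 MY_SNAPSHOT
rpc.py bdev_lvol_resize 88dfe43c-9013-42d5-9031-80f577d76bf0 30      # 20 -> 30
rpc.py bdev_lvol_clone 6c3cd6bd-a7ae-4a44-b272-d245244e07d4 MY_CLONE
rpc.py bdev_lvol_inflate d71da6be-6ad7-44e7-9ec9-caa6fe585de5

The sizes 20 and 30 are the LVOL_BDEV_INIT_SIZE/LVOL_BDEV_FINAL_SIZE constants set at the top of nvmf_lvol.sh; resizing a lvol that has a snapshot and then inflating the clone is the lvol-under-IO behavior this test exercises.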
00:08:21.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:21.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:21.651 Initialization complete. Launching workers. 00:08:21.651 ======================================================== 00:08:21.651 Latency(us) 00:08:21.651 Device Information : IOPS MiB/s Average min max 00:08:21.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16510.00 64.49 7756.21 1600.56 59132.64 00:08:21.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17272.00 67.47 7412.66 1068.11 45404.92 00:08:21.651 ======================================================== 00:08:21.651 Total : 33782.00 131.96 7580.56 1068.11 59132.64 00:08:21.651 00:08:21.651 08:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:21.911 08:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 88dfe43c-9013-42d5-9031-80f577d76bf0 00:08:21.911 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f55bd1a6-8a31-4019-99b5-4ba0f196e50f 00:08:22.171 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:22.171 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:22.171 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:22.171 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:22.171 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:22.171 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:22.171 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:22.171 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:22.171 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:22.171 rmmod nvme_tcp 00:08:22.171 rmmod nvme_fabrics 00:08:22.171 rmmod nvme_keyring 00:08:22.171 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:22.171 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:22.171 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:22.171 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1780084 ']' 00:08:22.171 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1780084 00:08:22.171 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1780084 ']' 00:08:22.171 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1780084 00:08:22.171 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:22.171 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:22.171 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1780084 00:08:22.431 08:07:19 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:22.431 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:22.431 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1780084' 00:08:22.431 killing process with pid 1780084 00:08:22.431 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1780084 00:08:22.431 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1780084 00:08:22.431 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:22.431 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:22.431 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:22.431 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:22.431 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:22.431 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:22.431 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:22.431 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:22.431 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:22.431 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.431 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.431 08:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:24.979 00:08:24.979 real 0m23.922s 00:08:24.979 user 1m4.355s 00:08:24.979 sys 0m8.829s 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:24.979 ************************************ 00:08:24.979 END TEST nvmf_lvol 00:08:24.979 ************************************ 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:24.979 ************************************ 00:08:24.979 START TEST nvmf_lvs_grow 00:08:24.979 ************************************ 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:24.979 * Looking for test storage... 
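Before the lvs_grow test proper begins, it is worth reading the nvmftestfini teardown traced above as a short recipe. A minimal sketch of the same steps (the pid and interface names are from this run, and the assumption that _remove_spdk_ns reduces to deleting the cvl_0_0_ns_spdk namespace is ours; the trace hides its body behind xtrace_disable_per_cmd):

    kill 1780084 && wait 1780084                            # killprocess: stop the nvmf_tgt app
    modprobe -v -r nvme-tcp                                 # unloads nvme_tcp / nvme_fabrics / nvme_keyring, as logged
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the SPDK_NVMF-tagged rules the test added
    ip netns del cvl_0_0_ns_spdk                            # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                                # clear the initiator-side test address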
00:08:24.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:24.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.979 --rc genhtml_branch_coverage=1 00:08:24.979 --rc genhtml_function_coverage=1 00:08:24.979 --rc genhtml_legend=1 00:08:24.979 --rc geninfo_all_blocks=1 00:08:24.979 --rc geninfo_unexecuted_blocks=1 00:08:24.979 00:08:24.979 ' 00:08:24.979 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:24.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.980 --rc genhtml_branch_coverage=1 00:08:24.980 --rc genhtml_function_coverage=1 00:08:24.980 --rc genhtml_legend=1 00:08:24.980 --rc geninfo_all_blocks=1 00:08:24.980 --rc geninfo_unexecuted_blocks=1 00:08:24.980 00:08:24.980 ' 00:08:24.980 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:24.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.980 --rc genhtml_branch_coverage=1 00:08:24.980 --rc genhtml_function_coverage=1 00:08:24.980 --rc genhtml_legend=1 00:08:24.980 --rc geninfo_all_blocks=1 00:08:24.980 --rc geninfo_unexecuted_blocks=1 00:08:24.980 00:08:24.980 ' 00:08:24.980 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:24.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.980 --rc genhtml_branch_coverage=1 00:08:24.980 --rc genhtml_function_coverage=1 00:08:24.980 --rc genhtml_legend=1 00:08:24.980 --rc geninfo_all_blocks=1 00:08:24.980 --rc geninfo_unexecuted_blocks=1 00:08:24.980 00:08:24.980 ' 00:08:24.980 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:24.980 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:24.980 08:07:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.980 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.980 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.980 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.980 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.980 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.980 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.980 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.980 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.980 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.980 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:24.980 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:24.980 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.980 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.980 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:24.980 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.980 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:24.980 08:07:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:24.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:24.980 08:07:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:33.126 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:33.126 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:33.126 08:07:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:33.126 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:33.126 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.126 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:33.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:33.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:08:33.127 00:08:33.127 --- 10.0.0.2 ping statistics --- 00:08:33.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.127 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:33.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:33.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:08:33.127 00:08:33.127 --- 10.0.0.1 ping statistics --- 00:08:33.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.127 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1787160 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1787160 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1787160 ']' 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.127 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:33.127 [2024-11-28 08:07:29.662812] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:08:33.127 [2024-11-28 08:07:29.662880] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.127 [2024-11-28 08:07:29.763878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.127 [2024-11-28 08:07:29.814597] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.127 [2024-11-28 08:07:29.814652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.127 [2024-11-28 08:07:29.814660] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.127 [2024-11-28 08:07:29.814667] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.127 [2024-11-28 08:07:29.814673] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:33.127 [2024-11-28 08:07:29.815408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.389 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.389 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:33.389 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:33.389 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:33.389 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:33.389 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.389 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:33.651 [2024-11-28 08:07:30.702945] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.651 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:33.651 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.651 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.651 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:33.651 ************************************ 00:08:33.651 START TEST lvs_grow_clean 00:08:33.651 ************************************ 00:08:33.651 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:33.651 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:33.651 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:33.651 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:33.651 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:33.651 08:07:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:33.651 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:33.651 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:33.651 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:33.651 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:33.911 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:33.911 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:34.172 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=4b180785-5012-4bb6-b5d1-c23441bcd9a9 00:08:34.172 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b180785-5012-4bb6-b5d1-c23441bcd9a9 00:08:34.172 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:34.172 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:34.172 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:34.172 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4b180785-5012-4bb6-b5d1-c23441bcd9a9 lvol 150 00:08:34.433 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e1e1ea96-85eb-4709-827a-e9d787698633 00:08:34.433 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:34.433 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:34.694 [2024-11-28 08:07:31.734074] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:34.694 [2024-11-28 08:07:31.734151] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:34.694 true 00:08:34.694 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
4b180785-5012-4bb6-b5d1-c23441bcd9a9 00:08:34.694 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:34.694 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:34.694 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:34.955 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e1e1ea96-85eb-4709-827a-e9d787698633 00:08:35.216 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:35.216 [2024-11-28 08:07:32.432358] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:35.216 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:35.477 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1787771 00:08:35.477 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:35.477 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:35.477 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1787771 /var/tmp/bdevperf.sock 00:08:35.477 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1787771 ']' 00:08:35.477 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:35.477 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.477 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:35.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:35.477 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.477 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:35.477 [2024-11-28 08:07:32.667430] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:08:35.477 [2024-11-28 08:07:32.667503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1787771 ] 00:08:35.477 [2024-11-28 08:07:32.760191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.739 [2024-11-28 08:07:32.812645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.312 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.312 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:36.312 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:36.886 Nvme0n1 00:08:36.886 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:36.886 [ 00:08:36.886 { 00:08:36.886 "name": "Nvme0n1", 00:08:36.886 "aliases": [ 00:08:36.886 "e1e1ea96-85eb-4709-827a-e9d787698633" 00:08:36.886 ], 00:08:36.886 "product_name": "NVMe disk", 00:08:36.886 "block_size": 4096, 00:08:36.886 "num_blocks": 38912, 00:08:36.886 "uuid": "e1e1ea96-85eb-4709-827a-e9d787698633", 00:08:36.886 "numa_id": 0, 00:08:36.886 "assigned_rate_limits": { 00:08:36.886 "rw_ios_per_sec": 0, 00:08:36.886 "rw_mbytes_per_sec": 0, 00:08:36.886 "r_mbytes_per_sec": 0, 00:08:36.886 "w_mbytes_per_sec": 0 00:08:36.886 }, 00:08:36.886 "claimed": false, 00:08:36.886 "zoned": false, 00:08:36.886 "supported_io_types": { 00:08:36.886 "read": true, 00:08:36.886 "write": true, 00:08:36.886 "unmap": true, 00:08:36.886 "flush": true, 00:08:36.886 "reset": true, 00:08:36.886 "nvme_admin": true, 00:08:36.886 "nvme_io": true, 00:08:36.886 "nvme_io_md": false, 00:08:36.886 "write_zeroes": true, 00:08:36.886 "zcopy": false, 00:08:36.886 "get_zone_info": false, 00:08:36.886 "zone_management": false, 00:08:36.886 "zone_append": false, 00:08:36.886 "compare": true, 00:08:36.886 "compare_and_write": true, 00:08:36.886 "abort": true, 00:08:36.886 "seek_hole": false, 00:08:36.886 "seek_data": false, 00:08:36.886 "copy": true, 00:08:36.886 "nvme_iov_md": false 00:08:36.886 }, 00:08:36.886 "memory_domains": [ 00:08:36.886 { 00:08:36.886 "dma_device_id": "system", 00:08:36.886 "dma_device_type": 1 00:08:36.886 } 00:08:36.886 ], 00:08:36.886 "driver_specific": { 00:08:36.886 "nvme": [ 00:08:36.886 { 00:08:36.886 "trid": { 00:08:36.886 "trtype": "TCP", 00:08:36.886 "adrfam": "IPv4", 00:08:36.886 "traddr": "10.0.0.2", 00:08:36.886 "trsvcid": "4420", 00:08:36.886 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:36.886 }, 00:08:36.886 "ctrlr_data": { 00:08:36.886 "cntlid": 1, 00:08:36.886 "vendor_id": "0x8086", 00:08:36.886 "model_number": "SPDK bdev Controller", 00:08:36.886 "serial_number": "SPDK0", 00:08:36.886 "firmware_revision": "25.01", 00:08:36.886 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:36.886 "oacs": { 00:08:36.886 "security": 0, 00:08:36.886 "format": 0, 00:08:36.886 "firmware": 0, 00:08:36.886 "ns_manage": 0 00:08:36.886 }, 00:08:36.886 "multi_ctrlr": true, 00:08:36.886 
"ana_reporting": false 00:08:36.886 }, 00:08:36.886 "vs": { 00:08:36.886 "nvme_version": "1.3" 00:08:36.886 }, 00:08:36.886 "ns_data": { 00:08:36.886 "id": 1, 00:08:36.886 "can_share": true 00:08:36.886 } 00:08:36.886 } 00:08:36.886 ], 00:08:36.886 "mp_policy": "active_passive" 00:08:36.886 } 00:08:36.886 } 00:08:36.886 ] 00:08:36.886 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1787986 00:08:36.886 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:36.886 08:07:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:37.149 Running I/O for 10 seconds... 00:08:38.090 Latency(us) 00:08:38.090 [2024-11-28T07:07:35.379Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.090 Nvme0n1 : 1.00 24727.00 96.59 0.00 0.00 0.00 0.00 0.00 00:08:38.090 [2024-11-28T07:07:35.379Z] =================================================================================================================== 00:08:38.090 [2024-11-28T07:07:35.379Z] Total : 24727.00 96.59 0.00 0.00 0.00 0.00 0.00 00:08:38.090 00:08:39.031 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4b180785-5012-4bb6-b5d1-c23441bcd9a9 00:08:39.031 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.031 Nvme0n1 : 2.00 24973.00 97.55 0.00 0.00 0.00 0.00 0.00 00:08:39.031 [2024-11-28T07:07:36.320Z] =================================================================================================================== 00:08:39.031 [2024-11-28T07:07:36.320Z] Total : 24973.00 97.55 0.00 0.00 0.00 0.00 0.00 00:08:39.031 00:08:39.031 true 00:08:39.031 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b180785-5012-4bb6-b5d1-c23441bcd9a9 00:08:39.032 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:39.292 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:39.292 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:39.292 08:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1787986 00:08:40.234 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.234 Nvme0n1 : 3.00 25071.67 97.94 0.00 0.00 0.00 0.00 0.00 00:08:40.234 [2024-11-28T07:07:37.523Z] =================================================================================================================== 00:08:40.234 [2024-11-28T07:07:37.523Z] Total : 25071.67 97.94 0.00 0.00 0.00 0.00 0.00 00:08:40.234 00:08:41.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.176 Nvme0n1 : 4.00 25129.75 98.16 0.00 0.00 0.00 0.00 0.00 00:08:41.176 [2024-11-28T07:07:38.465Z] 
=================================================================================================================== 00:08:41.176 [2024-11-28T07:07:38.465Z] Total : 25129.75 98.16 0.00 0.00 0.00 0.00 0.00 00:08:41.176 00:08:42.119 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.119 Nvme0n1 : 5.00 25172.40 98.33 0.00 0.00 0.00 0.00 0.00 00:08:42.119 [2024-11-28T07:07:39.408Z] =================================================================================================================== 00:08:42.119 [2024-11-28T07:07:39.408Z] Total : 25172.40 98.33 0.00 0.00 0.00 0.00 0.00 00:08:42.119 00:08:43.060 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.060 Nvme0n1 : 6.00 25200.17 98.44 0.00 0.00 0.00 0.00 0.00 00:08:43.060 [2024-11-28T07:07:40.349Z] =================================================================================================================== 00:08:43.060 [2024-11-28T07:07:40.349Z] Total : 25200.17 98.44 0.00 0.00 0.00 0.00 0.00 00:08:43.060 00:08:44.002 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.002 Nvme0n1 : 7.00 25218.86 98.51 0.00 0.00 0.00 0.00 0.00 00:08:44.002 [2024-11-28T07:07:41.291Z] =================================================================================================================== 00:08:44.002 [2024-11-28T07:07:41.291Z] Total : 25218.86 98.51 0.00 0.00 0.00 0.00 0.00 00:08:44.002 00:08:44.944 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.944 Nvme0n1 : 8.00 25242.12 98.60 0.00 0.00 0.00 0.00 0.00 00:08:44.944 [2024-11-28T07:07:42.233Z] =================================================================================================================== 00:08:44.944 [2024-11-28T07:07:42.233Z] Total : 25242.12 98.60 0.00 0.00 0.00 0.00 0.00 00:08:44.944 00:08:46.329 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.329 Nvme0n1 : 9.00 25260.44 98.67 0.00 0.00 0.00 0.00 0.00 00:08:46.329 [2024-11-28T07:07:43.618Z] =================================================================================================================== 00:08:46.329 [2024-11-28T07:07:43.618Z] Total : 25260.44 98.67 0.00 0.00 0.00 0.00 0.00 00:08:46.329 00:08:47.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.271 Nvme0n1 : 10.00 25275.10 98.73 0.00 0.00 0.00 0.00 0.00 00:08:47.271 [2024-11-28T07:07:44.560Z] =================================================================================================================== 00:08:47.271 [2024-11-28T07:07:44.560Z] Total : 25275.10 98.73 0.00 0.00 0.00 0.00 0.00 00:08:47.271 00:08:47.271 00:08:47.271 Latency(us) 00:08:47.271 [2024-11-28T07:07:44.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.271 Nvme0n1 : 10.01 25274.29 98.73 0.00 0.00 5061.00 2498.56 11960.32 00:08:47.271 [2024-11-28T07:07:44.560Z] =================================================================================================================== 00:08:47.271 [2024-11-28T07:07:44.560Z] Total : 25274.29 98.73 0.00 0.00 5061.00 2498.56 11960.32 00:08:47.271 { 00:08:47.271 "results": [ 00:08:47.271 { 00:08:47.271 "job": "Nvme0n1", 00:08:47.271 "core_mask": "0x2", 00:08:47.271 "workload": "randwrite", 00:08:47.271 "status": "finished", 00:08:47.271 "queue_depth": 128, 00:08:47.271 "io_size": 4096, 00:08:47.271 
"runtime": 10.005386, 00:08:47.271 "iops": 25274.287268876982, 00:08:47.271 "mibps": 98.72768464405071, 00:08:47.271 "io_failed": 0, 00:08:47.271 "io_timeout": 0, 00:08:47.271 "avg_latency_us": 5060.996693385638, 00:08:47.271 "min_latency_us": 2498.56, 00:08:47.271 "max_latency_us": 11960.32 00:08:47.271 } 00:08:47.271 ], 00:08:47.271 "core_count": 1 00:08:47.271 } 00:08:47.271 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1787771 00:08:47.271 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1787771 ']' 00:08:47.271 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1787771 00:08:47.271 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:47.271 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.271 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1787771 00:08:47.271 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:47.271 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:47.271 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1787771' 00:08:47.271 killing process with pid 1787771 00:08:47.271 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1787771 00:08:47.271 Received shutdown signal, test time was about 10.000000 seconds 00:08:47.271 00:08:47.271 Latency(us) 00:08:47.271 [2024-11-28T07:07:44.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.271 [2024-11-28T07:07:44.560Z] =================================================================================================================== 00:08:47.271 [2024-11-28T07:07:44.560Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:47.271 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1787771 00:08:47.272 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:47.532 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:47.532 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b180785-5012-4bb6-b5d1-c23441bcd9a9 00:08:47.532 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:47.791 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:47.791 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:47.791 08:07:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:48.051 [2024-11-28 08:07:45.088454] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:48.051 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b180785-5012-4bb6-b5d1-c23441bcd9a9 00:08:48.051 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:48.051 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b180785-5012-4bb6-b5d1-c23441bcd9a9 00:08:48.051 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.051 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.051 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.051 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.051 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.051 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.051 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.051 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:48.051 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b180785-5012-4bb6-b5d1-c23441bcd9a9 00:08:48.051 request: 00:08:48.051 { 00:08:48.051 "uuid": "4b180785-5012-4bb6-b5d1-c23441bcd9a9", 00:08:48.051 "method": "bdev_lvol_get_lvstores", 00:08:48.051 "req_id": 1 00:08:48.051 } 00:08:48.051 Got JSON-RPC error response 00:08:48.051 response: 00:08:48.051 { 00:08:48.051 "code": -19, 00:08:48.051 "message": "No such device" 00:08:48.051 } 00:08:48.051 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:48.051 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:48.051 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:48.051 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:48.051 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
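The NOT-wrapped call above is a negative check: once bdev_aio_delete has removed the bdev backing the lvstore, bdev_lvol_get_lvstores for that UUID must fail with -19 (No such device). A standalone sketch of the same assertion, with a placeholder UUID variable in place of the one from this run:

  if scripts/rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" 2>/dev/null; then
      echo "lvstore still registered after bdev_aio_delete" >&2
      exit 1
  fi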
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:48.311 aio_bdev 00:08:48.311 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e1e1ea96-85eb-4709-827a-e9d787698633 00:08:48.311 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=e1e1ea96-85eb-4709-827a-e9d787698633 00:08:48.311 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.311 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:48.311 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.311 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.311 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:48.571 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e1e1ea96-85eb-4709-827a-e9d787698633 -t 2000 00:08:48.571 [ 00:08:48.571 { 00:08:48.571 "name": "e1e1ea96-85eb-4709-827a-e9d787698633", 00:08:48.571 "aliases": [ 00:08:48.571 "lvs/lvol" 00:08:48.571 ], 00:08:48.571 "product_name": "Logical Volume", 00:08:48.571 "block_size": 4096, 00:08:48.571 "num_blocks": 38912, 00:08:48.571 "uuid": "e1e1ea96-85eb-4709-827a-e9d787698633", 00:08:48.571 "assigned_rate_limits": { 00:08:48.571 "rw_ios_per_sec": 0, 00:08:48.571 "rw_mbytes_per_sec": 0, 00:08:48.571 "r_mbytes_per_sec": 0, 00:08:48.571 "w_mbytes_per_sec": 0 00:08:48.571 }, 00:08:48.571 "claimed": false, 00:08:48.571 "zoned": false, 00:08:48.571 "supported_io_types": { 00:08:48.571 "read": true, 00:08:48.571 "write": true, 00:08:48.571 "unmap": true, 00:08:48.571 "flush": false, 00:08:48.571 "reset": true, 00:08:48.571 "nvme_admin": false, 00:08:48.571 "nvme_io": false, 00:08:48.571 "nvme_io_md": false, 00:08:48.571 "write_zeroes": true, 00:08:48.571 "zcopy": false, 00:08:48.571 "get_zone_info": false, 00:08:48.571 "zone_management": false, 00:08:48.571 "zone_append": false, 00:08:48.571 "compare": false, 00:08:48.571 "compare_and_write": false, 00:08:48.571 "abort": false, 00:08:48.571 "seek_hole": true, 00:08:48.571 "seek_data": true, 00:08:48.571 "copy": false, 00:08:48.571 "nvme_iov_md": false 00:08:48.571 }, 00:08:48.571 "driver_specific": { 00:08:48.571 "lvol": { 00:08:48.571 "lvol_store_uuid": "4b180785-5012-4bb6-b5d1-c23441bcd9a9", 00:08:48.571 "base_bdev": "aio_bdev", 00:08:48.571 "thin_provision": false, 00:08:48.571 "num_allocated_clusters": 38, 00:08:48.571 "snapshot": false, 00:08:48.571 "clone": false, 00:08:48.571 "esnap_clone": false 00:08:48.571 } 00:08:48.571 } 00:08:48.571 } 00:08:48.571 ] 00:08:48.571 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:48.571 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b180785-5012-4bb6-b5d1-c23441bcd9a9 00:08:48.571 
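waitforbdev above is the harness helper that blocks until a bdev is usable: it waits for examine to finish, then queries bdev_get_bdevs with a millisecond timeout. Condensed, with a placeholder bdev name:

  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py bdev_get_bdevs -b "$BDEV_NAME" -t 2000 > /dev/null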
08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:48.832 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:48.832 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b180785-5012-4bb6-b5d1-c23441bcd9a9 00:08:48.832 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:49.093 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:49.093 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e1e1ea96-85eb-4709-827a-e9d787698633 00:08:49.093 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4b180785-5012-4bb6-b5d1-c23441bcd9a9 00:08:49.354 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:49.615 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:49.615 00:08:49.615 real 0m15.946s 00:08:49.615 user 0m15.678s 00:08:49.615 sys 0m1.419s 00:08:49.615 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.615 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:49.615 ************************************ 00:08:49.615 END TEST lvs_grow_clean 00:08:49.615 ************************************ 00:08:49.615 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:49.615 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:49.615 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.615 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:49.615 ************************************ 00:08:49.615 START TEST lvs_grow_dirty 00:08:49.615 ************************************ 00:08:49.615 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:49.615 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:49.615 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:49.615 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:49.615 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:49.615 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:49.615 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:49.615 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:49.615 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:49.615 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:49.876 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:49.876 08:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:50.137 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=21653295-2ec9-4aaa-aad2-d35288df6cdd 00:08:50.137 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21653295-2ec9-4aaa-aad2-d35288df6cdd 00:08:50.137 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:50.137 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:50.137 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:50.137 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 21653295-2ec9-4aaa-aad2-d35288df6cdd lvol 150 00:08:50.398 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=987ab9c5-d847-422e-9667-5711a3cf3202 00:08:50.398 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:50.398 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:50.398 [2024-11-28 08:07:47.681823] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:50.398 [2024-11-28 08:07:47.681867] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:50.398 true 00:08:50.660 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21653295-2ec9-4aaa-aad2-d35288df6cdd 00:08:50.660 08:07:47 
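This is the setup half of lvs_grow_dirty: a 200 MiB backing file carved into 4 MiB clusters (49 usable once metadata is taken), a 150 MiB lvol on top, then the file is grown to 400 MiB and bdev_aio_rescan makes SPDK pick up the new size (the "old block count 51200, new block count 102400" notice above). A condensed sketch of that sequence, with a placeholder file path:

  truncate -s 200M /tmp/aio_file
  scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
  lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$(scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)
  truncate -s 400M /tmp/aio_file            # grow the backing file on disk
  scripts/rpc.py bdev_aio_rescan aio_bdev   # let the AIO bdev see the new size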
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:50.660 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:50.660 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:50.921 08:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 987ab9c5-d847-422e-9667-5711a3cf3202 00:08:50.921 08:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:51.182 [2024-11-28 08:07:48.319731] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.182 08:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:51.443 08:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1790971 00:08:51.443 08:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:51.443 08:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:51.443 08:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1790971 /var/tmp/bdevperf.sock 00:08:51.443 08:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1790971 ']' 00:08:51.443 08:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:51.443 08:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.443 08:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:51.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:51.443 08:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.443 08:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:51.443 [2024-11-28 08:07:48.536943] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
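Note total_data_clusters is still 49 after the rescan: the AIO bdev grew, but the lvstore does not until bdev_lvol_grow_lvstore runs later, mid-I/O. The export step traced above, condensed (subsystem name and addresses exactly as in this run, $lvol as set in the previous sketch):

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420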
00:08:51.443 [2024-11-28 08:07:48.536996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1790971 ] 00:08:51.443 [2024-11-28 08:07:48.618619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.443 [2024-11-28 08:07:48.648348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.388 08:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.388 08:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:52.388 08:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:52.388 Nvme0n1 00:08:52.388 08:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:52.649 [ 00:08:52.649 { 00:08:52.649 "name": "Nvme0n1", 00:08:52.649 "aliases": [ 00:08:52.649 "987ab9c5-d847-422e-9667-5711a3cf3202" 00:08:52.649 ], 00:08:52.649 "product_name": "NVMe disk", 00:08:52.649 "block_size": 4096, 00:08:52.649 "num_blocks": 38912, 00:08:52.649 "uuid": "987ab9c5-d847-422e-9667-5711a3cf3202", 00:08:52.649 "numa_id": 0, 00:08:52.649 "assigned_rate_limits": { 00:08:52.649 "rw_ios_per_sec": 0, 00:08:52.649 "rw_mbytes_per_sec": 0, 00:08:52.649 "r_mbytes_per_sec": 0, 00:08:52.649 "w_mbytes_per_sec": 0 00:08:52.649 }, 00:08:52.649 "claimed": false, 00:08:52.649 "zoned": false, 00:08:52.649 "supported_io_types": { 00:08:52.649 "read": true, 00:08:52.649 "write": true, 00:08:52.649 "unmap": true, 00:08:52.649 "flush": true, 00:08:52.649 "reset": true, 00:08:52.649 "nvme_admin": true, 00:08:52.649 "nvme_io": true, 00:08:52.649 "nvme_io_md": false, 00:08:52.649 "write_zeroes": true, 00:08:52.649 "zcopy": false, 00:08:52.649 "get_zone_info": false, 00:08:52.649 "zone_management": false, 00:08:52.649 "zone_append": false, 00:08:52.649 "compare": true, 00:08:52.649 "compare_and_write": true, 00:08:52.649 "abort": true, 00:08:52.649 "seek_hole": false, 00:08:52.649 "seek_data": false, 00:08:52.649 "copy": true, 00:08:52.649 "nvme_iov_md": false 00:08:52.649 }, 00:08:52.649 "memory_domains": [ 00:08:52.649 { 00:08:52.649 "dma_device_id": "system", 00:08:52.649 "dma_device_type": 1 00:08:52.649 } 00:08:52.649 ], 00:08:52.649 "driver_specific": { 00:08:52.649 "nvme": [ 00:08:52.649 { 00:08:52.649 "trid": { 00:08:52.649 "trtype": "TCP", 00:08:52.649 "adrfam": "IPv4", 00:08:52.649 "traddr": "10.0.0.2", 00:08:52.649 "trsvcid": "4420", 00:08:52.649 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:52.649 }, 00:08:52.649 "ctrlr_data": { 00:08:52.649 "cntlid": 1, 00:08:52.649 "vendor_id": "0x8086", 00:08:52.649 "model_number": "SPDK bdev Controller", 00:08:52.649 "serial_number": "SPDK0", 00:08:52.649 "firmware_revision": "25.01", 00:08:52.649 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:52.649 "oacs": { 00:08:52.649 "security": 0, 00:08:52.649 "format": 0, 00:08:52.649 "firmware": 0, 00:08:52.649 "ns_manage": 0 00:08:52.649 }, 00:08:52.649 "multi_ctrlr": true, 00:08:52.649 
"ana_reporting": false 00:08:52.649 }, 00:08:52.649 "vs": { 00:08:52.649 "nvme_version": "1.3" 00:08:52.649 }, 00:08:52.649 "ns_data": { 00:08:52.649 "id": 1, 00:08:52.649 "can_share": true 00:08:52.649 } 00:08:52.649 } 00:08:52.649 ], 00:08:52.649 "mp_policy": "active_passive" 00:08:52.649 } 00:08:52.649 } 00:08:52.649 ] 00:08:52.649 08:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1791239 00:08:52.649 08:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:52.649 08:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:52.649 Running I/O for 10 seconds... 00:08:53.591 Latency(us) 00:08:53.591 [2024-11-28T07:07:50.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.591 Nvme0n1 : 1.00 24742.00 96.65 0.00 0.00 0.00 0.00 0.00 00:08:53.591 [2024-11-28T07:07:50.880Z] =================================================================================================================== 00:08:53.591 [2024-11-28T07:07:50.880Z] Total : 24742.00 96.65 0.00 0.00 0.00 0.00 0.00 00:08:53.591 00:08:54.534 08:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 21653295-2ec9-4aaa-aad2-d35288df6cdd 00:08:54.534 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.534 Nvme0n1 : 2.00 24968.00 97.53 0.00 0.00 0.00 0.00 0.00 00:08:54.534 [2024-11-28T07:07:51.823Z] =================================================================================================================== 00:08:54.534 [2024-11-28T07:07:51.823Z] Total : 24968.00 97.53 0.00 0.00 0.00 0.00 0.00 00:08:54.534 00:08:54.794 true 00:08:54.794 08:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21653295-2ec9-4aaa-aad2-d35288df6cdd 00:08:54.794 08:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:55.055 08:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:55.055 08:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:55.055 08:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1791239 00:08:55.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.626 Nvme0n1 : 3.00 25071.00 97.93 0.00 0.00 0.00 0.00 0.00 00:08:55.626 [2024-11-28T07:07:52.915Z] =================================================================================================================== 00:08:55.626 [2024-11-28T07:07:52.915Z] Total : 25071.00 97.93 0.00 0.00 0.00 0.00 0.00 00:08:55.626 00:08:56.566 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.566 Nvme0n1 : 4.00 25127.50 98.15 0.00 0.00 0.00 0.00 0.00 00:08:56.566 [2024-11-28T07:07:53.855Z] 
=================================================================================================================== 00:08:56.566 [2024-11-28T07:07:53.855Z] Total : 25127.50 98.15 0.00 0.00 0.00 0.00 0.00 00:08:56.566 00:08:57.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.950 Nvme0n1 : 5.00 25180.00 98.36 0.00 0.00 0.00 0.00 0.00 00:08:57.950 [2024-11-28T07:07:55.240Z] =================================================================================================================== 00:08:57.951 [2024-11-28T07:07:55.240Z] Total : 25180.00 98.36 0.00 0.00 0.00 0.00 0.00 00:08:57.951 00:08:58.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.892 Nvme0n1 : 6.00 25217.50 98.51 0.00 0.00 0.00 0.00 0.00 00:08:58.892 [2024-11-28T07:07:56.181Z] =================================================================================================================== 00:08:58.892 [2024-11-28T07:07:56.181Z] Total : 25217.50 98.51 0.00 0.00 0.00 0.00 0.00 00:08:58.892 00:08:59.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.834 Nvme0n1 : 7.00 25235.00 98.57 0.00 0.00 0.00 0.00 0.00 00:08:59.834 [2024-11-28T07:07:57.123Z] =================================================================================================================== 00:08:59.834 [2024-11-28T07:07:57.123Z] Total : 25235.00 98.57 0.00 0.00 0.00 0.00 0.00 00:08:59.834 00:09:00.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.776 Nvme0n1 : 8.00 25256.50 98.66 0.00 0.00 0.00 0.00 0.00 00:09:00.776 [2024-11-28T07:07:58.065Z] =================================================================================================================== 00:09:00.776 [2024-11-28T07:07:58.065Z] Total : 25256.50 98.66 0.00 0.00 0.00 0.00 0.00 00:09:00.776 00:09:01.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.716 Nvme0n1 : 9.00 25273.11 98.72 0.00 0.00 0.00 0.00 0.00 00:09:01.716 [2024-11-28T07:07:59.005Z] =================================================================================================================== 00:09:01.716 [2024-11-28T07:07:59.005Z] Total : 25273.11 98.72 0.00 0.00 0.00 0.00 0.00 00:09:01.716 00:09:02.656 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.656 Nvme0n1 : 10.00 25286.70 98.78 0.00 0.00 0.00 0.00 0.00 00:09:02.656 [2024-11-28T07:07:59.945Z] =================================================================================================================== 00:09:02.656 [2024-11-28T07:07:59.945Z] Total : 25286.70 98.78 0.00 0.00 0.00 0.00 0.00 00:09:02.656 00:09:02.656 00:09:02.656 Latency(us) 00:09:02.656 [2024-11-28T07:07:59.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.656 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.656 Nvme0n1 : 10.00 25285.30 98.77 0.00 0.00 5059.25 3003.73 15400.96 00:09:02.656 [2024-11-28T07:07:59.945Z] =================================================================================================================== 00:09:02.656 [2024-11-28T07:07:59.945Z] Total : 25285.30 98.77 0.00 0.00 5059.25 3003.73 15400.96 00:09:02.656 { 00:09:02.656 "results": [ 00:09:02.656 { 00:09:02.656 "job": "Nvme0n1", 00:09:02.656 "core_mask": "0x2", 00:09:02.656 "workload": "randwrite", 00:09:02.656 "status": "finished", 00:09:02.656 "queue_depth": 128, 00:09:02.656 "io_size": 4096, 00:09:02.656 
"runtime": 10.003044, 00:09:02.656 "iops": 25285.303153720008, 00:09:02.656 "mibps": 98.77071544421878, 00:09:02.656 "io_failed": 0, 00:09:02.656 "io_timeout": 0, 00:09:02.656 "avg_latency_us": 5059.248091724983, 00:09:02.656 "min_latency_us": 3003.733333333333, 00:09:02.656 "max_latency_us": 15400.96 00:09:02.656 } 00:09:02.656 ], 00:09:02.656 "core_count": 1 00:09:02.656 } 00:09:02.656 08:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1790971 00:09:02.656 08:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1790971 ']' 00:09:02.656 08:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1790971 00:09:02.656 08:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:02.656 08:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:02.656 08:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1790971 00:09:02.656 08:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:02.656 08:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:02.656 08:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1790971' 00:09:02.656 killing process with pid 1790971 00:09:02.656 08:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1790971 00:09:02.656 Received shutdown signal, test time was about 10.000000 seconds 00:09:02.656 00:09:02.656 Latency(us) 00:09:02.656 [2024-11-28T07:07:59.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.656 [2024-11-28T07:07:59.945Z] =================================================================================================================== 00:09:02.656 [2024-11-28T07:07:59.945Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:02.656 08:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1790971 00:09:02.918 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:02.918 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:03.177 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21653295-2ec9-4aaa-aad2-d35288df6cdd 00:09:03.177 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:03.438 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:03.438 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:03.438 08:08:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1787160 00:09:03.438 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1787160 00:09:03.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1787160 Killed "${NVMF_APP[@]}" "$@" 00:09:03.438 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:03.438 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:03.438 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:03.438 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:03.438 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:03.438 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1793345 00:09:03.438 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1793345 00:09:03.438 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:03.438 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1793345 ']' 00:09:03.438 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.438 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.438 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.438 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.438 08:08:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:03.438 [2024-11-28 08:08:00.670775] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:09:03.438 [2024-11-28 08:08:00.670834] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.699 [2024-11-28 08:08:00.763591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.699 [2024-11-28 08:08:00.794505] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.699 [2024-11-28 08:08:00.794534] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:03.699 [2024-11-28 08:08:00.794539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.699 [2024-11-28 08:08:00.794544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
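This kill is what makes the dirty variant dirty: the original nvmf_tgt (pid 1787160) is SIGKILLed while the lvstore is live, so nothing is flushed, and a fresh target is started in its place. A minimal sketch of the pattern (the ip netns wrapper this rig uses is omitted):

  kill -9 "$nvmfpid"    # no clean shutdown; lvstore metadata left dirty
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!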
00:09:03.699 [2024-11-28 08:08:00.794549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:03.699 [2024-11-28 08:08:00.795013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.270 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.270 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:04.270 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:04.270 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:04.270 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:04.270 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.270 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:04.530 [2024-11-28 08:08:01.650476] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:04.530 [2024-11-28 08:08:01.650552] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:04.530 [2024-11-28 08:08:01.650574] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:04.530 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:04.530 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 987ab9c5-d847-422e-9667-5711a3cf3202 00:09:04.530 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=987ab9c5-d847-422e-9667-5711a3cf3202 00:09:04.530 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:04.530 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:04.530 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:04.530 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:04.530 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:04.790 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 987ab9c5-d847-422e-9667-5711a3cf3202 -t 2000 00:09:04.790 [ 00:09:04.790 { 00:09:04.790 "name": "987ab9c5-d847-422e-9667-5711a3cf3202", 00:09:04.790 "aliases": [ 00:09:04.790 "lvs/lvol" 00:09:04.790 ], 00:09:04.790 "product_name": "Logical Volume", 00:09:04.790 "block_size": 4096, 00:09:04.790 "num_blocks": 38912, 00:09:04.790 "uuid": "987ab9c5-d847-422e-9667-5711a3cf3202", 00:09:04.790 "assigned_rate_limits": { 00:09:04.790 "rw_ios_per_sec": 0, 00:09:04.790 "rw_mbytes_per_sec": 0, 
00:09:04.790 "r_mbytes_per_sec": 0, 00:09:04.790 "w_mbytes_per_sec": 0 00:09:04.790 }, 00:09:04.790 "claimed": false, 00:09:04.790 "zoned": false, 00:09:04.790 "supported_io_types": { 00:09:04.790 "read": true, 00:09:04.790 "write": true, 00:09:04.790 "unmap": true, 00:09:04.790 "flush": false, 00:09:04.790 "reset": true, 00:09:04.790 "nvme_admin": false, 00:09:04.790 "nvme_io": false, 00:09:04.790 "nvme_io_md": false, 00:09:04.790 "write_zeroes": true, 00:09:04.790 "zcopy": false, 00:09:04.790 "get_zone_info": false, 00:09:04.790 "zone_management": false, 00:09:04.790 "zone_append": false, 00:09:04.790 "compare": false, 00:09:04.790 "compare_and_write": false, 00:09:04.790 "abort": false, 00:09:04.790 "seek_hole": true, 00:09:04.790 "seek_data": true, 00:09:04.790 "copy": false, 00:09:04.790 "nvme_iov_md": false 00:09:04.790 }, 00:09:04.791 "driver_specific": { 00:09:04.791 "lvol": { 00:09:04.791 "lvol_store_uuid": "21653295-2ec9-4aaa-aad2-d35288df6cdd", 00:09:04.791 "base_bdev": "aio_bdev", 00:09:04.791 "thin_provision": false, 00:09:04.791 "num_allocated_clusters": 38, 00:09:04.791 "snapshot": false, 00:09:04.791 "clone": false, 00:09:04.791 "esnap_clone": false 00:09:04.791 } 00:09:04.791 } 00:09:04.791 } 00:09:04.791 ] 00:09:04.791 08:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:04.791 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21653295-2ec9-4aaa-aad2-d35288df6cdd 00:09:04.791 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:05.051 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:05.051 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21653295-2ec9-4aaa-aad2-d35288df6cdd 00:09:05.051 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:05.051 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:05.051 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:05.311 [2024-11-28 08:08:02.483069] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:05.312 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21653295-2ec9-4aaa-aad2-d35288df6cdd 00:09:05.312 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:05.312 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21653295-2ec9-4aaa-aad2-d35288df6cdd 00:09:05.312 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:05.312 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:05.312 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:05.312 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:05.312 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:05.312 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:05.312 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:05.312 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:05.312 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21653295-2ec9-4aaa-aad2-d35288df6cdd 00:09:05.572 request: 00:09:05.572 { 00:09:05.572 "uuid": "21653295-2ec9-4aaa-aad2-d35288df6cdd", 00:09:05.572 "method": "bdev_lvol_get_lvstores", 00:09:05.572 "req_id": 1 00:09:05.572 } 00:09:05.572 Got JSON-RPC error response 00:09:05.572 response: 00:09:05.572 { 00:09:05.572 "code": -19, 00:09:05.572 "message": "No such device" 00:09:05.572 } 00:09:05.572 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:05.572 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:05.572 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:05.572 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:05.572 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:05.572 aio_bdev 00:09:05.572 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 987ab9c5-d847-422e-9667-5711a3cf3202 00:09:05.572 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=987ab9c5-d847-422e-9667-5711a3cf3202 00:09:05.572 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:05.572 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:05.572 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:05.572 08:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:05.572 08:08:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:05.833 08:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 987ab9c5-d847-422e-9667-5711a3cf3202 -t 2000 00:09:06.093 [ 00:09:06.094 { 00:09:06.094 "name": "987ab9c5-d847-422e-9667-5711a3cf3202", 00:09:06.094 "aliases": [ 00:09:06.094 "lvs/lvol" 00:09:06.094 ], 00:09:06.094 "product_name": "Logical Volume", 00:09:06.094 "block_size": 4096, 00:09:06.094 "num_blocks": 38912, 00:09:06.094 "uuid": "987ab9c5-d847-422e-9667-5711a3cf3202", 00:09:06.094 "assigned_rate_limits": { 00:09:06.094 "rw_ios_per_sec": 0, 00:09:06.094 "rw_mbytes_per_sec": 0, 00:09:06.094 "r_mbytes_per_sec": 0, 00:09:06.094 "w_mbytes_per_sec": 0 00:09:06.094 }, 00:09:06.094 "claimed": false, 00:09:06.094 "zoned": false, 00:09:06.094 "supported_io_types": { 00:09:06.094 "read": true, 00:09:06.094 "write": true, 00:09:06.094 "unmap": true, 00:09:06.094 "flush": false, 00:09:06.094 "reset": true, 00:09:06.094 "nvme_admin": false, 00:09:06.094 "nvme_io": false, 00:09:06.094 "nvme_io_md": false, 00:09:06.094 "write_zeroes": true, 00:09:06.094 "zcopy": false, 00:09:06.094 "get_zone_info": false, 00:09:06.094 "zone_management": false, 00:09:06.094 "zone_append": false, 00:09:06.094 "compare": false, 00:09:06.094 "compare_and_write": false, 00:09:06.094 "abort": false, 00:09:06.094 "seek_hole": true, 00:09:06.094 "seek_data": true, 00:09:06.094 "copy": false, 00:09:06.094 "nvme_iov_md": false 00:09:06.094 }, 00:09:06.094 "driver_specific": { 00:09:06.094 "lvol": { 00:09:06.094 "lvol_store_uuid": "21653295-2ec9-4aaa-aad2-d35288df6cdd", 00:09:06.094 "base_bdev": "aio_bdev", 00:09:06.094 "thin_provision": false, 00:09:06.094 "num_allocated_clusters": 38, 00:09:06.094 "snapshot": false, 00:09:06.094 "clone": false, 00:09:06.094 "esnap_clone": false 00:09:06.094 } 00:09:06.094 } 00:09:06.094 } 00:09:06.094 ] 00:09:06.094 08:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:06.094 08:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21653295-2ec9-4aaa-aad2-d35288df6cdd 00:09:06.094 08:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:06.094 08:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:06.094 08:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 21653295-2ec9-4aaa-aad2-d35288df6cdd 00:09:06.094 08:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:06.354 08:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:06.354 08:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 987ab9c5-d847-422e-9667-5711a3cf3202 00:09:06.613 08:08:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 21653295-2ec9-4aaa-aad2-d35288df6cdd 00:09:06.613 08:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:06.875 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:06.875 00:09:06.875 real 0m17.302s 00:09:06.875 user 0m45.584s 00:09:06.875 sys 0m3.049s 00:09:06.875 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.875 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:06.875 ************************************ 00:09:06.875 END TEST lvs_grow_dirty 00:09:06.875 ************************************ 00:09:06.875 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:06.875 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:06.875 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:06.875 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:06.875 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:06.875 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:06.875 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:06.875 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:06.875 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:07.135 nvmf_trace.0 00:09:07.135 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:07.135 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:07.135 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:07.135 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:07.135 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:07.135 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:07.135 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:07.135 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:07.135 rmmod nvme_tcp 00:09:07.135 rmmod nvme_fabrics 00:09:07.135 rmmod nvme_keyring 00:09:07.135 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:07.135 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:07.135 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:07.135 
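The lvs-side teardown traced above runs strictly top-down before the shm trace blob is tarred into the build artifacts: lvol, then lvstore, then the AIO bdev, then the backing file. Condensed, with placeholder handles:

  scripts/rpc.py bdev_lvol_delete "$lvol"
  scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
  scripts/rpc.py bdev_aio_delete aio_bdev
  rm -f /tmp/aio_file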
08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1793345 ']' 00:09:07.135 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1793345 00:09:07.136 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1793345 ']' 00:09:07.136 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1793345 00:09:07.136 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:07.136 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.136 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1793345 00:09:07.136 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.136 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.136 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1793345' 00:09:07.136 killing process with pid 1793345 00:09:07.136 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1793345 00:09:07.136 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1793345 00:09:07.396 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:07.396 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:07.396 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:07.396 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:07.396 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:07.396 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:07.396 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:07.396 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:07.396 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:07.396 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.396 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.396 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.307 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:09.307 00:09:09.307 real 0m44.754s 00:09:09.307 user 1m7.688s 00:09:09.307 sys 0m10.659s 00:09:09.307 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.307 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:09.307 ************************************ 00:09:09.307 END TEST nvmf_lvs_grow 00:09:09.307 ************************************ 00:09:09.307 08:08:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
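nvmftestfini's unwind as traced above: sync, unload the NVMe host modules, restore iptables minus the SPDK rules, and flush the test interface. Condensed:

  sync
  modprobe -v -r nvme-tcp      # also drags out nvme_fabrics / nvme_keyring deps
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1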
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:09.307 08:08:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:09.307 08:08:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.307 08:08:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.569 ************************************ 00:09:09.569 START TEST nvmf_bdev_io_wait 00:09:09.569 ************************************ 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:09.569 * Looking for test storage... 00:09:09.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:09.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.569 --rc genhtml_branch_coverage=1 00:09:09.569 --rc genhtml_function_coverage=1 00:09:09.569 --rc genhtml_legend=1 00:09:09.569 --rc geninfo_all_blocks=1 00:09:09.569 --rc geninfo_unexecuted_blocks=1 00:09:09.569 00:09:09.569 ' 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:09.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.569 --rc genhtml_branch_coverage=1 00:09:09.569 --rc genhtml_function_coverage=1 00:09:09.569 --rc genhtml_legend=1 00:09:09.569 --rc geninfo_all_blocks=1 00:09:09.569 --rc geninfo_unexecuted_blocks=1 00:09:09.569 00:09:09.569 ' 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:09.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.569 --rc genhtml_branch_coverage=1 00:09:09.569 --rc genhtml_function_coverage=1 00:09:09.569 --rc genhtml_legend=1 00:09:09.569 --rc geninfo_all_blocks=1 00:09:09.569 --rc geninfo_unexecuted_blocks=1 00:09:09.569 00:09:09.569 ' 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:09.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.569 --rc genhtml_branch_coverage=1 00:09:09.569 --rc genhtml_function_coverage=1 00:09:09.569 --rc genhtml_legend=1 00:09:09.569 --rc geninfo_all_blocks=1 00:09:09.569 --rc geninfo_unexecuted_blocks=1 00:09:09.569 00:09:09.569 ' 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.569 08:08:06 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.569 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.570 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.831 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:09.831 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:09.831 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:09.831 08:08:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:18.140 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:18.140 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:18.140 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.140 08:08:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:18.140 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:18.140 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:18.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:18.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:09:18.140 00:09:18.140 --- 10.0.0.2 ping statistics --- 00:09:18.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.140 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:18.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:18.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:09:18.140 00:09:18.140 --- 10.0.0.1 ping statistics --- 00:09:18.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.140 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1798426 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1798426 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1798426 ']' 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.140 08:08:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:18.140 [2024-11-28 08:08:14.437249] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
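[Editorial sketch — not part of the captured log.] The nvmf_tcp_init sequence traced above condenses to the following standalone sketch. It shows how the harness splits a two-port NIC into an initiator side and a target side using a network namespace, so NVMe/TCP traffic crosses a real link instead of loopback. The interface names (cvl_0_0/cvl_0_1), the 10.0.0.0/24 addressing, and port 4420 are the values this particular log happens to use, not fixed constants.

#!/usr/bin/env bash
# Rebuild the test network the way nvmf/common.sh does in this trace:
# one port stays in the host namespace (initiator), the other moves
# into a private namespace (target).
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0                 # start from clean interfaces
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"          # target port enters the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port, tagging the rule so cleanup can later strip
# it back out of iptables-save output (the 'grep -v SPDK_NVMF' above).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                       # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator

The target application is then started inside that namespace, which is why the nvmf_tgt launch in the trace is prefixed with ip netns exec cvl_0_0_ns_spdk.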
00:09:18.140 [2024-11-28 08:08:14.437319] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.140 [2024-11-28 08:08:14.538083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:18.140 [2024-11-28 08:08:14.592738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:18.140 [2024-11-28 08:08:14.592791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:18.140 [2024-11-28 08:08:14.592800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.140 [2024-11-28 08:08:14.592807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.140 [2024-11-28 08:08:14.592814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:18.140 [2024-11-28 08:08:14.595232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.140 [2024-11-28 08:08:14.595414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:18.140 [2024-11-28 08:08:14.595563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:18.140 [2024-11-28 08:08:14.595564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.140 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.140 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:18.140 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:18.140 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:18.140 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:18.140 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.140 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:18.140 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.140 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:18.140 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.140 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:18.140 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.140 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:18.140 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.140 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:18.140 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.140 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:18.141 [2024-11-28 08:08:15.391073] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:18.141 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.141 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:18.141 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.141 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:18.141 Malloc0 00:09:18.141 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.141 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:18.141 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.141 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:18.401 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.401 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:18.401 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:18.402 [2024-11-28 08:08:15.456594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1798603 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1798606 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:18.402 { 00:09:18.402 "params": { 
00:09:18.402 "name": "Nvme$subsystem", 00:09:18.402 "trtype": "$TEST_TRANSPORT", 00:09:18.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:18.402 "adrfam": "ipv4", 00:09:18.402 "trsvcid": "$NVMF_PORT", 00:09:18.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:18.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:18.402 "hdgst": ${hdgst:-false}, 00:09:18.402 "ddgst": ${ddgst:-false} 00:09:18.402 }, 00:09:18.402 "method": "bdev_nvme_attach_controller" 00:09:18.402 } 00:09:18.402 EOF 00:09:18.402 )") 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1798609 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:18.402 { 00:09:18.402 "params": { 00:09:18.402 "name": "Nvme$subsystem", 00:09:18.402 "trtype": "$TEST_TRANSPORT", 00:09:18.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:18.402 "adrfam": "ipv4", 00:09:18.402 "trsvcid": "$NVMF_PORT", 00:09:18.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:18.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:18.402 "hdgst": ${hdgst:-false}, 00:09:18.402 "ddgst": ${ddgst:-false} 00:09:18.402 }, 00:09:18.402 "method": "bdev_nvme_attach_controller" 00:09:18.402 } 00:09:18.402 EOF 00:09:18.402 )") 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1798613 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:18.402 { 00:09:18.402 "params": { 00:09:18.402 "name": "Nvme$subsystem", 00:09:18.402 "trtype": "$TEST_TRANSPORT", 00:09:18.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:18.402 "adrfam": "ipv4", 00:09:18.402 "trsvcid": "$NVMF_PORT", 00:09:18.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:18.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:18.402 "hdgst": ${hdgst:-false}, 
00:09:18.402 "ddgst": ${ddgst:-false} 00:09:18.402 }, 00:09:18.402 "method": "bdev_nvme_attach_controller" 00:09:18.402 } 00:09:18.402 EOF 00:09:18.402 )") 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:18.402 { 00:09:18.402 "params": { 00:09:18.402 "name": "Nvme$subsystem", 00:09:18.402 "trtype": "$TEST_TRANSPORT", 00:09:18.402 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:18.402 "adrfam": "ipv4", 00:09:18.402 "trsvcid": "$NVMF_PORT", 00:09:18.402 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:18.402 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:18.402 "hdgst": ${hdgst:-false}, 00:09:18.402 "ddgst": ${ddgst:-false} 00:09:18.402 }, 00:09:18.402 "method": "bdev_nvme_attach_controller" 00:09:18.402 } 00:09:18.402 EOF 00:09:18.402 )") 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1798603 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:18.402 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:18.402 "params": { 00:09:18.402 "name": "Nvme1", 00:09:18.402 "trtype": "tcp", 00:09:18.402 "traddr": "10.0.0.2", 00:09:18.402 "adrfam": "ipv4", 00:09:18.402 "trsvcid": "4420", 00:09:18.402 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:18.402 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:18.402 "hdgst": false, 00:09:18.402 "ddgst": false 00:09:18.402 }, 00:09:18.402 "method": "bdev_nvme_attach_controller" 00:09:18.402 }' 00:09:18.403 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
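[Editorial sketch — not part of the captured log.] The rpc_cmd calls traced a few lines above are the entire target bring-up: because nvmf_tgt was started with --wait-for-rpc, the script first shrinks the bdev_io pool (which is what lets this test exercise the io_wait path), then releases initialization, creates the TCP transport, and publishes a 64 MiB malloc bdev as a namespace of cnode1. rpc_cmd is the autotest wrapper around scripts/rpc.py talking to the /var/tmp/spdk.sock socket seen above; a plain-rpc.py rendering of the same sequence (sizes and addresses as in this run):

# Mirror of the rpc_cmd sequence from bdev_io_wait.sh lines 18-25.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC bdev_set_options -p 5 -c 1           # tiny bdev_io pool/cache to force IO_WAIT
$RPC framework_start_init                 # leave the --wait-for-rpc state
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0 # 64 MiB backing bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420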
00:09:18.403 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:18.403 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:18.403 "params": { 00:09:18.403 "name": "Nvme1", 00:09:18.403 "trtype": "tcp", 00:09:18.403 "traddr": "10.0.0.2", 00:09:18.403 "adrfam": "ipv4", 00:09:18.403 "trsvcid": "4420", 00:09:18.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:18.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:18.403 "hdgst": false, 00:09:18.403 "ddgst": false 00:09:18.403 }, 00:09:18.403 "method": "bdev_nvme_attach_controller" 00:09:18.403 }' 00:09:18.403 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:18.403 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:18.403 "params": { 00:09:18.403 "name": "Nvme1", 00:09:18.403 "trtype": "tcp", 00:09:18.403 "traddr": "10.0.0.2", 00:09:18.403 "adrfam": "ipv4", 00:09:18.403 "trsvcid": "4420", 00:09:18.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:18.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:18.403 "hdgst": false, 00:09:18.403 "ddgst": false 00:09:18.403 }, 00:09:18.403 "method": "bdev_nvme_attach_controller" 00:09:18.403 }' 00:09:18.403 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:18.403 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:18.403 "params": { 00:09:18.403 "name": "Nvme1", 00:09:18.403 "trtype": "tcp", 00:09:18.403 "traddr": "10.0.0.2", 00:09:18.403 "adrfam": "ipv4", 00:09:18.403 "trsvcid": "4420", 00:09:18.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:18.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:18.403 "hdgst": false, 00:09:18.403 "ddgst": false 00:09:18.403 }, 00:09:18.403 "method": "bdev_nvme_attach_controller" 00:09:18.403 }' 00:09:18.403 [2024-11-28 08:08:15.517027] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:09:18.403 [2024-11-28 08:08:15.517092] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:18.403 [2024-11-28 08:08:15.518901] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:09:18.403 [2024-11-28 08:08:15.518983] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:18.403 [2024-11-28 08:08:15.519422] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:09:18.403 [2024-11-28 08:08:15.519481] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:18.403 [2024-11-28 08:08:15.519691] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
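[Editorial sketch — not part of the captured log.] Each of the four bdevperf instances above receives its JSON config through process substitution, which is where the --json /dev/fd/63 in the trace comes from. A condensed sketch of the pattern follows: the here-doc/IFS-join generator and all bdevperf flags are taken from this trace, but the outer "subsystems"/"bdev" envelope is an assumption for illustration — the log only shows the per-controller entries that gen_nvmf_target_json emits.

# Build a bdev_nvme_attach_controller config and feed it to bdevperf.
# Values are hard-coded to the ones this run used.
gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,   # comma-join the entries, then pretty-print via jq
    jq . <<< "{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[${config[*]}]}]}"
}

BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
# One instance per workload, each pinned to its own core and tracked by
# PID so the script can wait on all four (the wait 17986xx calls below).
"$BDEVPERF" -m 0x10 -i 1 --json <(gen_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
"$BDEVPERF" -m 0x20 -i 2 --json <(gen_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
READ_PID=$!
"$BDEVPERF" -m 0x40 -i 3 --json <(gen_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
FLUSH_PID=$!
"$BDEVPERF" -m 0x80 -i 4 --json <(gen_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"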
00:09:18.403 [2024-11-28 08:08:15.519759] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:18.664 [2024-11-28 08:08:15.721237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.664 [2024-11-28 08:08:15.762387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:18.664 [2024-11-28 08:08:15.789083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.664 [2024-11-28 08:08:15.827573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:18.664 [2024-11-28 08:08:15.879088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.664 [2024-11-28 08:08:15.919069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:18.925 [2024-11-28 08:08:15.975995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.925 [2024-11-28 08:08:16.017339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:18.925 Running I/O for 1 seconds... 00:09:18.925 Running I/O for 1 seconds... 00:09:18.925 Running I/O for 1 seconds... 00:09:19.186 Running I/O for 1 seconds... 00:09:20.129 11533.00 IOPS, 45.05 MiB/s 00:09:20.129 Latency(us) 00:09:20.129 [2024-11-28T07:08:17.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:20.129 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:20.129 Nvme1n1 : 1.01 11594.13 45.29 0.00 0.00 11002.39 4915.20 15182.51 00:09:20.129 [2024-11-28T07:08:17.418Z] =================================================================================================================== 00:09:20.129 [2024-11-28T07:08:17.418Z] Total : 11594.13 45.29 0.00 0.00 11002.39 4915.20 15182.51 00:09:20.129 9075.00 IOPS, 35.45 MiB/s 00:09:20.129 Latency(us) 00:09:20.129 [2024-11-28T07:08:17.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:20.129 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:20.129 Nvme1n1 : 1.01 9131.68 35.67 0.00 0.00 13958.81 6608.21 22828.37 00:09:20.129 [2024-11-28T07:08:17.418Z] =================================================================================================================== 00:09:20.129 [2024-11-28T07:08:17.418Z] Total : 9131.68 35.67 0.00 0.00 13958.81 6608.21 22828.37 00:09:20.129 9644.00 IOPS, 37.67 MiB/s 00:09:20.129 Latency(us) 00:09:20.129 [2024-11-28T07:08:17.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:20.129 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:20.129 Nvme1n1 : 1.01 9727.39 38.00 0.00 0.00 13112.83 5024.43 23920.64 00:09:20.129 [2024-11-28T07:08:17.418Z] =================================================================================================================== 00:09:20.129 [2024-11-28T07:08:17.418Z] Total : 9727.39 38.00 0.00 0.00 13112.83 5024.43 23920.64 00:09:20.129 176960.00 IOPS, 691.25 MiB/s 00:09:20.129 Latency(us) 00:09:20.129 [2024-11-28T07:08:17.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:20.129 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:20.129 Nvme1n1 : 1.00 176609.31 689.88 0.00 0.00 720.67 300.37 1966.08 00:09:20.129 [2024-11-28T07:08:17.418Z] 
=================================================================================================================== 00:09:20.129 [2024-11-28T07:08:17.418Z] Total : 176609.31 689.88 0.00 0.00 720.67 300.37 1966.08 00:09:20.129 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1798606 00:09:20.129 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1798609 00:09:20.130 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1798613 00:09:20.130 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.130 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.130 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:20.130 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.130 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:20.130 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:20.130 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:20.130 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:20.130 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:20.130 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:20.130 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:20.130 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:20.130 rmmod nvme_tcp 00:09:20.130 rmmod nvme_fabrics 00:09:20.391 rmmod nvme_keyring 00:09:20.391 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:20.391 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:20.391 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:20.391 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1798426 ']' 00:09:20.391 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1798426 00:09:20.391 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1798426 ']' 00:09:20.391 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1798426 00:09:20.391 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:20.391 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.391 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1798426 00:09:20.391 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.391 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.391 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 1798426' 00:09:20.391 killing process with pid 1798426 00:09:20.391 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1798426 00:09:20.391 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1798426 00:09:20.391 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:20.391 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:20.391 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:20.391 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:20.391 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:20.391 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:20.391 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:20.653 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:20.653 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:20.653 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.653 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.653 08:08:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.569 08:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:22.569 00:09:22.569 real 0m13.148s 00:09:22.569 user 0m19.741s 00:09:22.569 sys 0m7.534s 00:09:22.569 08:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.569 08:08:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.569 ************************************ 00:09:22.569 END TEST nvmf_bdev_io_wait 00:09:22.569 ************************************ 00:09:22.569 08:08:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:22.569 08:08:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:22.569 08:08:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.569 08:08:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:22.569 ************************************ 00:09:22.569 START TEST nvmf_queue_depth 00:09:22.569 ************************************ 00:09:22.569 08:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:22.830 * Looking for test storage... 
00:09:22.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.830 08:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:22.830 08:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:22.830 08:08:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:22.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.830 --rc genhtml_branch_coverage=1 00:09:22.830 --rc genhtml_function_coverage=1 00:09:22.830 --rc genhtml_legend=1 00:09:22.830 --rc geninfo_all_blocks=1 00:09:22.830 --rc geninfo_unexecuted_blocks=1 00:09:22.830 00:09:22.830 ' 00:09:22.830 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:22.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.830 --rc genhtml_branch_coverage=1 00:09:22.830 --rc genhtml_function_coverage=1 00:09:22.831 --rc genhtml_legend=1 00:09:22.831 --rc geninfo_all_blocks=1 00:09:22.831 --rc geninfo_unexecuted_blocks=1 00:09:22.831 00:09:22.831 ' 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:22.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.831 --rc genhtml_branch_coverage=1 00:09:22.831 --rc genhtml_function_coverage=1 00:09:22.831 --rc genhtml_legend=1 00:09:22.831 --rc geninfo_all_blocks=1 00:09:22.831 --rc geninfo_unexecuted_blocks=1 00:09:22.831 00:09:22.831 ' 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:22.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.831 --rc genhtml_branch_coverage=1 00:09:22.831 --rc genhtml_function_coverage=1 00:09:22.831 --rc genhtml_legend=1 00:09:22.831 --rc geninfo_all_blocks=1 00:09:22.831 --rc geninfo_unexecuted_blocks=1 00:09:22.831 00:09:22.831 ' 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:22.831 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:22.831 08:08:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:30.975 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:30.975 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:30.975 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:30.975 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:30.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:30.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:09:30.975 00:09:30.975 --- 10.0.0.2 ping statistics --- 00:09:30.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.975 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:30.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:30.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:09:30.975 00:09:30.975 --- 10.0.0.1 ping statistics --- 00:09:30.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.975 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:09:30.975 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.976 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:30.976 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:30.976 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.976 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:30.976 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:30.976 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.976 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:30.976 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:30.976 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:30.976 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:30.976 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:30.976 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.976 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1803207 00:09:30.976 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1803207 00:09:30.976 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:30.976 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1803207 ']' 00:09:30.976 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.976 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.976 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.976 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.976 08:08:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.976 [2024-11-28 08:08:27.678491] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:09:30.976 [2024-11-28 08:08:27.678557] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.976 [2024-11-28 08:08:27.779780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.976 [2024-11-28 08:08:27.830775] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.976 [2024-11-28 08:08:27.830828] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.976 [2024-11-28 08:08:27.830836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.976 [2024-11-28 08:08:27.830843] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.976 [2024-11-28 08:08:27.830855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:30.976 [2024-11-28 08:08:27.831634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.236 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.236 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:31.236 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:31.236 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:31.236 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:31.497 [2024-11-28 08:08:28.538340] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:31.497 Malloc0 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.497 08:08:28 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:31.497 [2024-11-28 08:08:28.599481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1803511 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1803511 /var/tmp/bdevperf.sock 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1803511 ']' 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:31.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.497 08:08:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:31.497 [2024-11-28 08:08:28.658585] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:09:31.497 [2024-11-28 08:08:28.658649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1803511 ] 00:09:31.497 [2024-11-28 08:08:28.750151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.758 [2024-11-28 08:08:28.803433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.331 08:08:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.331 08:08:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:32.331 08:08:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:32.331 08:08:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.331 08:08:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:32.331 NVMe0n1 00:09:32.331 08:08:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.331 08:08:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:32.591 Running I/O for 10 seconds... 00:09:34.475 10251.00 IOPS, 40.04 MiB/s [2024-11-28T07:08:32.706Z] 10830.00 IOPS, 42.30 MiB/s [2024-11-28T07:08:34.095Z] 11185.00 IOPS, 43.69 MiB/s [2024-11-28T07:08:34.665Z] 11267.50 IOPS, 44.01 MiB/s [2024-11-28T07:08:36.049Z] 11654.20 IOPS, 45.52 MiB/s [2024-11-28T07:08:36.991Z] 11942.17 IOPS, 46.65 MiB/s [2024-11-28T07:08:37.935Z] 12110.00 IOPS, 47.30 MiB/s [2024-11-28T07:08:38.876Z] 12224.62 IOPS, 47.75 MiB/s [2024-11-28T07:08:39.819Z] 12371.78 IOPS, 48.33 MiB/s [2024-11-28T07:08:39.819Z] 12475.50 IOPS, 48.73 MiB/s 00:09:42.530 Latency(us) 00:09:42.530 [2024-11-28T07:08:39.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.530 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:42.530 Verification LBA range: start 0x0 length 0x4000 00:09:42.530 NVMe0n1 : 10.06 12493.26 48.80 0.00 0.00 81660.64 24139.09 74711.04 00:09:42.530 [2024-11-28T07:08:39.819Z] =================================================================================================================== 00:09:42.530 [2024-11-28T07:08:39.819Z] Total : 12493.26 48.80 0.00 0.00 81660.64 24139.09 74711.04 00:09:42.530 { 00:09:42.530 "results": [ 00:09:42.530 { 00:09:42.530 "job": "NVMe0n1", 00:09:42.530 "core_mask": "0x1", 00:09:42.530 "workload": "verify", 00:09:42.530 "status": "finished", 00:09:42.530 "verify_range": { 00:09:42.530 "start": 0, 00:09:42.530 "length": 16384 00:09:42.530 }, 00:09:42.530 "queue_depth": 1024, 00:09:42.530 "io_size": 4096, 00:09:42.530 "runtime": 10.060627, 00:09:42.530 "iops": 12493.25712999796, 00:09:42.530 "mibps": 48.801785664054535, 00:09:42.530 "io_failed": 0, 00:09:42.530 "io_timeout": 0, 00:09:42.530 "avg_latency_us": 81660.64076293528, 00:09:42.530 "min_latency_us": 24139.093333333334, 00:09:42.530 "max_latency_us": 74711.04 00:09:42.530 } 00:09:42.530 ], 00:09:42.530 "core_count": 1 00:09:42.530 } 00:09:42.530 08:08:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1803511 00:09:42.530 08:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1803511 ']' 00:09:42.530 08:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1803511 00:09:42.530 08:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:42.530 08:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.530 08:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1803511 00:09:42.790 08:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:42.790 08:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:42.790 08:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1803511' 00:09:42.790 killing process with pid 1803511 00:09:42.790 08:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1803511 00:09:42.790 Received shutdown signal, test time was about 10.000000 seconds 00:09:42.790 00:09:42.790 Latency(us) 00:09:42.790 [2024-11-28T07:08:40.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.790 [2024-11-28T07:08:40.079Z] =================================================================================================================== 00:09:42.790 [2024-11-28T07:08:40.079Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:42.790 08:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1803511 00:09:42.790 08:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:42.790 08:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:42.790 08:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:42.790 08:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:42.790 08:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:42.790 08:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:42.790 08:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:42.790 08:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:42.790 rmmod nvme_tcp 00:09:42.790 rmmod nvme_fabrics 00:09:42.790 rmmod nvme_keyring 00:09:42.790 08:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:42.790 08:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:42.790 08:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:42.790 08:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1803207 ']' 00:09:42.790 08:08:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1803207 00:09:42.790 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1803207 ']' 00:09:42.790 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 1803207 00:09:42.790 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:42.790 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.790 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1803207 00:09:42.790 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:42.790 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:42.791 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1803207' 00:09:42.791 killing process with pid 1803207 00:09:42.791 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1803207 00:09:42.791 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1803207 00:09:43.052 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:43.052 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:43.052 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:43.052 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:43.052 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:43.052 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:43.052 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:43.052 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:43.052 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:43.052 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.052 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.052 08:08:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:45.601 00:09:45.601 real 0m22.424s 00:09:45.601 user 0m25.672s 00:09:45.601 sys 0m7.005s 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:45.601 ************************************ 00:09:45.601 END TEST nvmf_queue_depth 00:09:45.601 ************************************ 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:45.601 ************************************ 00:09:45.601 START TEST nvmf_target_multipath 00:09:45.601 ************************************ 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:45.601 * Looking for test storage... 00:09:45.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:45.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.601 --rc genhtml_branch_coverage=1 00:09:45.601 --rc genhtml_function_coverage=1 00:09:45.601 --rc genhtml_legend=1 00:09:45.601 --rc geninfo_all_blocks=1 00:09:45.601 --rc geninfo_unexecuted_blocks=1 00:09:45.601 00:09:45.601 ' 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:45.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.601 --rc genhtml_branch_coverage=1 00:09:45.601 --rc genhtml_function_coverage=1 00:09:45.601 --rc genhtml_legend=1 00:09:45.601 --rc geninfo_all_blocks=1 00:09:45.601 --rc geninfo_unexecuted_blocks=1 00:09:45.601 00:09:45.601 ' 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:45.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.601 --rc genhtml_branch_coverage=1 00:09:45.601 --rc genhtml_function_coverage=1 00:09:45.601 --rc genhtml_legend=1 00:09:45.601 --rc geninfo_all_blocks=1 00:09:45.601 --rc geninfo_unexecuted_blocks=1 00:09:45.601 00:09:45.601 ' 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:45.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.601 --rc genhtml_branch_coverage=1 00:09:45.601 --rc genhtml_function_coverage=1 00:09:45.601 --rc genhtml_legend=1 00:09:45.601 --rc geninfo_all_blocks=1 00:09:45.601 --rc geninfo_unexecuted_blocks=1 00:09:45.601 00:09:45.601 ' 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.601 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:45.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:45.602 08:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:53.743 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:53.743 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:53.743 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:53.743 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:53.743 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:53.743 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:53.743 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:53.743 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:53.743 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:53.744 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:53.744 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:53.744 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.744 08:08:49 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:53.744 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:53.744 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:53.745 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:53.745 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:53.745 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:53.745 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:53.745 08:08:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:53.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:53.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:09:53.745 00:09:53.745 --- 10.0.0.2 ping statistics --- 00:09:53.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.745 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:53.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:53.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:09:53.745 00:09:53.745 --- 10.0.0.1 ping statistics --- 00:09:53.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.745 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:53.745 only one NIC for nvmf test 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
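
[Editor's annotation] For readers following the trace: the nvmf_tcp_init sequence above wires the two E810 ports (cvl_0_0 and cvl_0_1) back-to-back by isolating the target side in its own network namespace, so initiator traffic really crosses the physical link instead of the loopback path. A condensed sketch of that topology, with every command, interface name, and address taken verbatim from the trace (only reordered and commented for readability):

    # Target interface lives in its own netns; initiator stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port on the initiator-facing interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Verify reachability in both directions, as the pings above do.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
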
00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:53.745 rmmod nvme_tcp 00:09:53.745 rmmod nvme_fabrics 00:09:53.745 rmmod nvme_keyring 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.745 08:08:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:55.131 00:09:55.131 real 0m9.967s 00:09:55.131 user 0m2.290s 00:09:55.131 sys 0m5.629s 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:55.131 ************************************ 00:09:55.131 END TEST nvmf_target_multipath 00:09:55.131 ************************************ 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:55.131 ************************************ 00:09:55.131 START TEST nvmf_zcopy 00:09:55.131 ************************************ 00:09:55.131 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:55.393 * Looking for test storage... 
00:09:55.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:55.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.394 --rc genhtml_branch_coverage=1 00:09:55.394 --rc genhtml_function_coverage=1 00:09:55.394 --rc genhtml_legend=1 00:09:55.394 --rc geninfo_all_blocks=1 00:09:55.394 --rc geninfo_unexecuted_blocks=1 00:09:55.394 00:09:55.394 ' 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:55.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.394 --rc genhtml_branch_coverage=1 00:09:55.394 --rc genhtml_function_coverage=1 00:09:55.394 --rc genhtml_legend=1 00:09:55.394 --rc geninfo_all_blocks=1 00:09:55.394 --rc geninfo_unexecuted_blocks=1 00:09:55.394 00:09:55.394 ' 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:55.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.394 --rc genhtml_branch_coverage=1 00:09:55.394 --rc genhtml_function_coverage=1 00:09:55.394 --rc genhtml_legend=1 00:09:55.394 --rc geninfo_all_blocks=1 00:09:55.394 --rc geninfo_unexecuted_blocks=1 00:09:55.394 00:09:55.394 ' 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:55.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.394 --rc genhtml_branch_coverage=1 00:09:55.394 --rc genhtml_function_coverage=1 00:09:55.394 --rc genhtml_legend=1 00:09:55.394 --rc geninfo_all_blocks=1 00:09:55.394 --rc geninfo_unexecuted_blocks=1 00:09:55.394 00:09:55.394 ' 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:55.394 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:55.395 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:55.395 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.395 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.395 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.395 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:55.395 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:55.395 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:55.395 08:08:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:03.543 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:03.543 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.543 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:03.544 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:03.544 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:03.544 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:03.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:03.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:10:03.544 00:10:03.544 --- 10.0.0.2 ping statistics --- 00:10:03.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.544 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:03.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:03.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:10:03.544 00:10:03.544 --- 10.0.0.1 ping statistics --- 00:10:03.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.544 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1814217 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1814217 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1814217 ']' 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.544 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.544 [2024-11-28 08:09:00.247959] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
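
[Editor's annotation] The nvmfappstart call above launches the target inside the test namespace and then blocks until its RPC socket answers. A rough equivalent, where the binary path, flags, and namespace name come from the trace, but the readiness poll is only an approximation of waitforlisten (its real implementation lives in autotest_common.sh and is not shown in this log), and $SPDK_ROOT is an illustrative placeholder for the checkout path:

    # Start nvmf_tgt in the target namespace, backgrounded.
    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Poll until the app is listening on its default socket,
    # /var/tmp/spdk.sock (approximation of waitforlisten).
    until "$SPDK_ROOT/scripts/rpc.py" rpc_get_methods &> /dev/null; do
        sleep 0.5
    done
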
00:10:03.544 [2024-11-28 08:09:00.248031] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.544 [2024-11-28 08:09:00.350532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.544 [2024-11-28 08:09:00.401139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.544 [2024-11-28 08:09:00.401199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:03.544 [2024-11-28 08:09:00.401209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.544 [2024-11-28 08:09:00.401216] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.544 [2024-11-28 08:09:00.401222] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:03.544 [2024-11-28 08:09:00.401948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.805 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.805 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:03.805 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:03.805 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:03.806 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.065 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.065 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:04.065 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:04.065 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.065 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.065 [2024-11-28 08:09:01.121315] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.065 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.065 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:04.065 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.065 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.065 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.065 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:04.065 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.065 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.065 [2024-11-28 08:09:01.145576] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:04.065 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.065 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:04.065 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.065 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.065 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.065 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:04.066 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.066 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.066 malloc0 00:10:04.066 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.066 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:04.066 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.066 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.066 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.066 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:04.066 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:04.066 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:04.066 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:04.066 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:04.066 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:04.066 { 00:10:04.066 "params": { 00:10:04.066 "name": "Nvme$subsystem", 00:10:04.066 "trtype": "$TEST_TRANSPORT", 00:10:04.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:04.066 "adrfam": "ipv4", 00:10:04.066 "trsvcid": "$NVMF_PORT", 00:10:04.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:04.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:04.066 "hdgst": ${hdgst:-false}, 00:10:04.066 "ddgst": ${ddgst:-false} 00:10:04.066 }, 00:10:04.066 "method": "bdev_nvme_attach_controller" 00:10:04.066 } 00:10:04.066 EOF 00:10:04.066 )") 00:10:04.066 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:04.066 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
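
[Editor's annotation] The rpc_cmd calls above are the whole zcopy target bring-up: a TCP transport with zero-copy enabled, a subsystem, listeners on the namespaced address, and a malloc RAM disk exposed as namespace 1. The same sequence as direct rpc.py invocations (all arguments appear verbatim in the trace; comments are interpretation):

    # TCP transport, zero-copy on, in-capsule data size 0.
    rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    # Subsystem: allow any host (-a), serial number, max 10 namespaces.
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10
    # Data and discovery listeners on the target-namespace address.
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 32 MiB malloc bdev with 4096-byte blocks, attached as NSID 1.
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
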
00:10:04.066 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:04.066 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:04.066 "params": { 00:10:04.066 "name": "Nvme1", 00:10:04.066 "trtype": "tcp", 00:10:04.066 "traddr": "10.0.0.2", 00:10:04.066 "adrfam": "ipv4", 00:10:04.066 "trsvcid": "4420", 00:10:04.066 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:04.066 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:04.066 "hdgst": false, 00:10:04.066 "ddgst": false 00:10:04.066 }, 00:10:04.066 "method": "bdev_nvme_attach_controller" 00:10:04.066 }' 00:10:04.066 [2024-11-28 08:09:01.247272] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:10:04.066 [2024-11-28 08:09:01.247340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1814537 ] 00:10:04.066 [2024-11-28 08:09:01.339429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.326 [2024-11-28 08:09:01.394799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.587 Running I/O for 10 seconds... 00:10:06.470 6537.00 IOPS, 51.07 MiB/s [2024-11-28T07:09:05.141Z] 8115.00 IOPS, 63.40 MiB/s [2024-11-28T07:09:06.078Z] 8670.33 IOPS, 67.74 MiB/s [2024-11-28T07:09:07.018Z] 8947.75 IOPS, 69.90 MiB/s [2024-11-28T07:09:07.960Z] 9115.40 IOPS, 71.21 MiB/s [2024-11-28T07:09:08.901Z] 9231.67 IOPS, 72.12 MiB/s [2024-11-28T07:09:09.844Z] 9310.86 IOPS, 72.74 MiB/s [2024-11-28T07:09:10.895Z] 9368.50 IOPS, 73.19 MiB/s [2024-11-28T07:09:11.857Z] 9414.00 IOPS, 73.55 MiB/s [2024-11-28T07:09:11.857Z] 9450.10 IOPS, 73.83 MiB/s 00:10:14.568 Latency(us) 00:10:14.568 [2024-11-28T07:09:11.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:14.568 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:14.568 Verification LBA range: start 0x0 length 0x1000 00:10:14.568 Nvme1n1 : 10.01 9453.27 73.85 0.00 0.00 13494.35 2102.61 27852.80 00:10:14.568 [2024-11-28T07:09:11.857Z] =================================================================================================================== 00:10:14.568 [2024-11-28T07:09:11.857Z] Total : 9453.27 73.85 0.00 0.00 13494.35 2102.61 27852.80 00:10:14.829 08:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1817149 00:10:14.829 08:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:14.829 08:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:14.829 08:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:14.829 08:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:14.829 08:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:14.829 08:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:14.829 08:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:14.829 08:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:14.829 { 00:10:14.829 "params": { 00:10:14.829 "name": 
"Nvme$subsystem", 00:10:14.829 "trtype": "$TEST_TRANSPORT", 00:10:14.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:14.829 "adrfam": "ipv4", 00:10:14.829 "trsvcid": "$NVMF_PORT", 00:10:14.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:14.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:14.829 "hdgst": ${hdgst:-false}, 00:10:14.829 "ddgst": ${ddgst:-false} 00:10:14.829 }, 00:10:14.829 "method": "bdev_nvme_attach_controller" 00:10:14.829 } 00:10:14.829 EOF 00:10:14.829 )") 00:10:14.829 [2024-11-28 08:09:11.867688] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.829 [2024-11-28 08:09:11.867715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.829 08:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:14.829 08:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:14.829 08:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:14.829 08:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:14.829 "params": { 00:10:14.829 "name": "Nvme1", 00:10:14.829 "trtype": "tcp", 00:10:14.829 "traddr": "10.0.0.2", 00:10:14.829 "adrfam": "ipv4", 00:10:14.829 "trsvcid": "4420", 00:10:14.829 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:14.829 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:14.829 "hdgst": false, 00:10:14.829 "ddgst": false 00:10:14.829 }, 00:10:14.829 "method": "bdev_nvme_attach_controller" 00:10:14.829 }' 00:10:14.829 [2024-11-28 08:09:11.879688] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.829 [2024-11-28 08:09:11.879696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.829 [2024-11-28 08:09:11.891718] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.829 [2024-11-28 08:09:11.891725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.829 [2024-11-28 08:09:11.903748] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.829 [2024-11-28 08:09:11.903755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.829 [2024-11-28 08:09:11.913882] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:10:14.829 [2024-11-28 08:09:11.913940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1817149 ] 00:10:14.829 [2024-11-28 08:09:11.915779] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.829 [2024-11-28 08:09:11.915787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.829 [2024-11-28 08:09:11.927811] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.829 [2024-11-28 08:09:11.927818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.829 [2024-11-28 08:09:11.939843] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.829 [2024-11-28 08:09:11.939850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.829 [2024-11-28 08:09:11.951872] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.829 [2024-11-28 08:09:11.951879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.829 [2024-11-28 08:09:11.963902] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.829 [2024-11-28 08:09:11.963910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.829 [2024-11-28 08:09:11.975933] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.829 [2024-11-28 08:09:11.975940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.829 [2024-11-28 08:09:11.987963] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.829 [2024-11-28 08:09:11.987970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.829 [2024-11-28 08:09:11.998112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.829 [2024-11-28 08:09:11.999995] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.829 [2024-11-28 08:09:12.000003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.829 [2024-11-28 08:09:12.012028] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.829 [2024-11-28 08:09:12.012037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.829 [2024-11-28 08:09:12.024058] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.829 [2024-11-28 08:09:12.024067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.829 [2024-11-28 08:09:12.027125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.829 [2024-11-28 08:09:12.036088] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.829 [2024-11-28 08:09:12.036095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.829 [2024-11-28 08:09:12.048123] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.829 [2024-11-28 08:09:12.048137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.829 [2024-11-28 08:09:12.060153] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:10:14.829 [2024-11-28 08:09:12.060167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.829 [2024-11-28 08:09:12.072186] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.829 [2024-11-28 08:09:12.072195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.829 [2024-11-28 08:09:12.084214] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.829 [2024-11-28 08:09:12.084221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.829 [2024-11-28 08:09:12.096263] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.829 [2024-11-28 08:09:12.096276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:14.829 [2024-11-28 08:09:12.108288] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:14.829 [2024-11-28 08:09:12.108298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.090 [2024-11-28 08:09:12.120318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.090 [2024-11-28 08:09:12.120327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.090 [2024-11-28 08:09:12.132348] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.090 [2024-11-28 08:09:12.132358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.090 [2024-11-28 08:09:12.144377] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.090 [2024-11-28 08:09:12.144386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.090 [2024-11-28 08:09:12.156409] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.090 [2024-11-28 08:09:12.156416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.090 [2024-11-28 08:09:12.168439] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.090 [2024-11-28 08:09:12.168446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.090 [2024-11-28 08:09:12.180473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.090 [2024-11-28 08:09:12.180482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.090 [2024-11-28 08:09:12.192503] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.090 [2024-11-28 08:09:12.192510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.090 [2024-11-28 08:09:12.204534] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.090 [2024-11-28 08:09:12.204540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.090 [2024-11-28 08:09:12.216563] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.090 [2024-11-28 08:09:12.216570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.090 [2024-11-28 08:09:12.228596] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.090 [2024-11-28 08:09:12.228605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.090 [2024-11-28 
08:09:12.240627] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.090 [2024-11-28 08:09:12.240634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.090 [2024-11-28 08:09:12.252659] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.090 [2024-11-28 08:09:12.252666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.090 [2024-11-28 08:09:12.264692] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.090 [2024-11-28 08:09:12.264699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.090 [2024-11-28 08:09:12.276729] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.090 [2024-11-28 08:09:12.276742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.090 [2024-11-28 08:09:12.319676] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.090 [2024-11-28 08:09:12.319687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.090 [2024-11-28 08:09:12.328861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.090 [2024-11-28 08:09:12.328870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.090 Running I/O for 5 seconds... 00:10:15.090 [2024-11-28 08:09:12.344837] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.090 [2024-11-28 08:09:12.344853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.090 [2024-11-28 08:09:12.357831] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.090 [2024-11-28 08:09:12.357847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.090 [2024-11-28 08:09:12.370674] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.090 [2024-11-28 08:09:12.370689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.352 [2024-11-28 08:09:12.383229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.352 [2024-11-28 08:09:12.383244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.352 [2024-11-28 08:09:12.395918] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.352 [2024-11-28 08:09:12.395932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.352 [2024-11-28 08:09:12.409210] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.352 [2024-11-28 08:09:12.409225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.352 [2024-11-28 08:09:12.422293] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.352 [2024-11-28 08:09:12.422307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.352 [2024-11-28 08:09:12.435218] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.352 [2024-11-28 08:09:12.435233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.352 [2024-11-28 08:09:12.448115] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:15.352 [2024-11-28 08:09:12.448130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.352 [2024-11-28 08:09:12.461061] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.352 [2024-11-28 08:09:12.461075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.352 [2024-11-28 08:09:12.474490] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.352 [2024-11-28 08:09:12.474505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.352 [2024-11-28 08:09:12.487900] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.352 [2024-11-28 08:09:12.487915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.352 [2024-11-28 08:09:12.501450] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.352 [2024-11-28 08:09:12.501464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.352 [2024-11-28 08:09:12.514750] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.352 [2024-11-28 08:09:12.514765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.352 [2024-11-28 08:09:12.528268] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.352 [2024-11-28 08:09:12.528282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.352 [2024-11-28 08:09:12.540688] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.352 [2024-11-28 08:09:12.540702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.352 [2024-11-28 08:09:12.554138] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.352 [2024-11-28 08:09:12.554152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.352 [2024-11-28 08:09:12.566727] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.352 [2024-11-28 08:09:12.566741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.352 [2024-11-28 08:09:12.579302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.352 [2024-11-28 08:09:12.579316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.352 [2024-11-28 08:09:12.592216] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.352 [2024-11-28 08:09:12.592230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.352 [2024-11-28 08:09:12.605577] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.352 [2024-11-28 08:09:12.605592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.352 [2024-11-28 08:09:12.619133] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.352 [2024-11-28 08:09:12.619147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.352 [2024-11-28 08:09:12.632016] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.352 [2024-11-28 08:09:12.632030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.613 [2024-11-28 08:09:12.645469] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.613 [2024-11-28 08:09:12.645484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.613 [2024-11-28 08:09:12.659293] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.613 [2024-11-28 08:09:12.659307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.613 [2024-11-28 08:09:12.672307] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.613 [2024-11-28 08:09:12.672321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.613 [2024-11-28 08:09:12.684959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.613 [2024-11-28 08:09:12.684974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.613 [2024-11-28 08:09:12.698203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.613 [2024-11-28 08:09:12.698218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.613 [2024-11-28 08:09:12.711176] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.613 [2024-11-28 08:09:12.711190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.613 [2024-11-28 08:09:12.724294] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.613 [2024-11-28 08:09:12.724309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.613 [2024-11-28 08:09:12.737540] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.613 [2024-11-28 08:09:12.737556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.613 [2024-11-28 08:09:12.750643] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.613 [2024-11-28 08:09:12.750657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.613 [2024-11-28 08:09:12.763856] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.613 [2024-11-28 08:09:12.763871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.613 [2024-11-28 08:09:12.777242] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.613 [2024-11-28 08:09:12.777257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.613 [2024-11-28 08:09:12.790509] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.613 [2024-11-28 08:09:12.790524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.613 [2024-11-28 08:09:12.803935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.613 [2024-11-28 08:09:12.803950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.613 [2024-11-28 08:09:12.817091] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.613 [2024-11-28 08:09:12.817107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.613 [2024-11-28 08:09:12.829796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.613 [2024-11-28 08:09:12.829811] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.613 [2024-11-28 08:09:12.843087] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.613 [2024-11-28 08:09:12.843102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.613 [2024-11-28 08:09:12.856425] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.613 [2024-11-28 08:09:12.856439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.613 [2024-11-28 08:09:12.869748] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.613 [2024-11-28 08:09:12.869762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.613 [2024-11-28 08:09:12.883219] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.613 [2024-11-28 08:09:12.883235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.614 [2024-11-28 08:09:12.896320] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.614 [2024-11-28 08:09:12.896335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.874 [2024-11-28 08:09:12.909429] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.874 [2024-11-28 08:09:12.909445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.874 [2024-11-28 08:09:12.922214] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.874 [2024-11-28 08:09:12.922229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.874 [2024-11-28 08:09:12.935442] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.874 [2024-11-28 08:09:12.935457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.874 [2024-11-28 08:09:12.948950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.874 [2024-11-28 08:09:12.948965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.874 [2024-11-28 08:09:12.961733] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.874 [2024-11-28 08:09:12.961748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.874 [2024-11-28 08:09:12.974236] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.874 [2024-11-28 08:09:12.974255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.874 [2024-11-28 08:09:12.987604] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.874 [2024-11-28 08:09:12.987619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.874 [2024-11-28 08:09:13.000529] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.874 [2024-11-28 08:09:13.000544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.874 [2024-11-28 08:09:13.013826] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.874 [2024-11-28 08:09:13.013841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.874 [2024-11-28 08:09:13.026308] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.874 [2024-11-28 08:09:13.026324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.874 [2024-11-28 08:09:13.039243] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.874 [2024-11-28 08:09:13.039258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.874 [2024-11-28 08:09:13.052782] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.874 [2024-11-28 08:09:13.052797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.874 [2024-11-28 08:09:13.066125] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.874 [2024-11-28 08:09:13.066140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.874 [2024-11-28 08:09:13.079043] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.874 [2024-11-28 08:09:13.079058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.874 [2024-11-28 08:09:13.092458] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.874 [2024-11-28 08:09:13.092473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.874 [2024-11-28 08:09:13.105407] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.874 [2024-11-28 08:09:13.105422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.874 [2024-11-28 08:09:13.118815] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.874 [2024-11-28 08:09:13.118830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.874 [2024-11-28 08:09:13.131432] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.874 [2024-11-28 08:09:13.131446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.874 [2024-11-28 08:09:13.144581] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.874 [2024-11-28 08:09:13.144597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:15.874 [2024-11-28 08:09:13.157645] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:15.874 [2024-11-28 08:09:13.157660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.136 [2024-11-28 08:09:13.170350] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.136 [2024-11-28 08:09:13.170365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.136 [2024-11-28 08:09:13.182938] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.136 [2024-11-28 08:09:13.182953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.136 [2024-11-28 08:09:13.196347] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.136 [2024-11-28 08:09:13.196362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.136 [2024-11-28 08:09:13.209684] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.136 [2024-11-28 08:09:13.209698] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.136 [2024-11-28 08:09:13.221989] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.136 [2024-11-28 08:09:13.222011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.136 [2024-11-28 08:09:13.235399] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.136 [2024-11-28 08:09:13.235414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.136 [2024-11-28 08:09:13.248399] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.136 [2024-11-28 08:09:13.248414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.136 [2024-11-28 08:09:13.261657] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.136 [2024-11-28 08:09:13.261671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.136 [2024-11-28 08:09:13.274934] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.136 [2024-11-28 08:09:13.274949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.136 [2024-11-28 08:09:13.288602] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.136 [2024-11-28 08:09:13.288616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.136 [2024-11-28 08:09:13.301847] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.136 [2024-11-28 08:09:13.301861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.136 [2024-11-28 08:09:13.314527] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.136 [2024-11-28 08:09:13.314542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.136 [2024-11-28 08:09:13.327259] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.136 [2024-11-28 08:09:13.327274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.137 19205.00 IOPS, 150.04 MiB/s [2024-11-28T07:09:13.426Z] [2024-11-28 08:09:13.340486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.137 [2024-11-28 08:09:13.340501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.137 [2024-11-28 08:09:13.353713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.137 [2024-11-28 08:09:13.353728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.137 [2024-11-28 08:09:13.367388] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.137 [2024-11-28 08:09:13.367403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.137 [2024-11-28 08:09:13.380628] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.137 [2024-11-28 08:09:13.380643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.137 [2024-11-28 08:09:13.394166] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.137 [2024-11-28 08:09:13.394181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.137 [2024-11-28 
08:09:13.407646] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.137 [2024-11-28 08:09:13.407661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.137 [2024-11-28 08:09:13.421323] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.137 [2024-11-28 08:09:13.421338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.397 [2024-11-28 08:09:13.434444] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.397 [2024-11-28 08:09:13.434459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.397 [2024-11-28 08:09:13.447649] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.397 [2024-11-28 08:09:13.447664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.397 [2024-11-28 08:09:13.460336] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.397 [2024-11-28 08:09:13.460351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.398 [2024-11-28 08:09:13.472747] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.398 [2024-11-28 08:09:13.472766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.398 [2024-11-28 08:09:13.485286] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.398 [2024-11-28 08:09:13.485300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.398 [2024-11-28 08:09:13.498334] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.398 [2024-11-28 08:09:13.498348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.398 [2024-11-28 08:09:13.511644] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.398 [2024-11-28 08:09:13.511658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.398 [2024-11-28 08:09:13.524959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.398 [2024-11-28 08:09:13.524974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.398 [2024-11-28 08:09:13.537510] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.398 [2024-11-28 08:09:13.537523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.398 [2024-11-28 08:09:13.550291] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.398 [2024-11-28 08:09:13.550305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.398 [2024-11-28 08:09:13.563266] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.398 [2024-11-28 08:09:13.563281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.398 [2024-11-28 08:09:13.576495] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.398 [2024-11-28 08:09:13.576509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.398 [2024-11-28 08:09:13.589876] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.398 [2024-11-28 08:09:13.589891] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.398 [2024-11-28 08:09:13.602816] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.398 [2024-11-28 08:09:13.602830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.398 [2024-11-28 08:09:13.616149] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.398 [2024-11-28 08:09:13.616169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.398 [2024-11-28 08:09:13.629937] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.398 [2024-11-28 08:09:13.629951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.398 [2024-11-28 08:09:13.643291] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.398 [2024-11-28 08:09:13.643305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.398 [2024-11-28 08:09:13.656890] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.398 [2024-11-28 08:09:13.656905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.398 [2024-11-28 08:09:13.669730] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.398 [2024-11-28 08:09:13.669744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.398 [2024-11-28 08:09:13.682796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.398 [2024-11-28 08:09:13.682810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.659 [2024-11-28 08:09:13.696113] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.659 [2024-11-28 08:09:13.696128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.659 [2024-11-28 08:09:13.709502] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.659 [2024-11-28 08:09:13.709516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.659 [2024-11-28 08:09:13.723032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.659 [2024-11-28 08:09:13.723046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.659 [2024-11-28 08:09:13.735558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.659 [2024-11-28 08:09:13.735572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.659 [2024-11-28 08:09:13.748387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.659 [2024-11-28 08:09:13.748401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.659 [2024-11-28 08:09:13.761170] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.659 [2024-11-28 08:09:13.761185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.659 [2024-11-28 08:09:13.773837] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.659 [2024-11-28 08:09:13.773851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.659 [2024-11-28 08:09:13.787288] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.659 [2024-11-28 08:09:13.787302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.659 [2024-11-28 08:09:13.800642] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.659 [2024-11-28 08:09:13.800656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.659 [2024-11-28 08:09:13.813917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.659 [2024-11-28 08:09:13.813932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.659 [2024-11-28 08:09:13.826865] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.659 [2024-11-28 08:09:13.826879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.659 [2024-11-28 08:09:13.840196] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.659 [2024-11-28 08:09:13.840210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.659 [2024-11-28 08:09:13.852860] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.659 [2024-11-28 08:09:13.852874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.659 [2024-11-28 08:09:13.865938] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.659 [2024-11-28 08:09:13.865953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.659 [2024-11-28 08:09:13.878864] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.659 [2024-11-28 08:09:13.878878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.659 [2024-11-28 08:09:13.892279] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.659 [2024-11-28 08:09:13.892294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.659 [2024-11-28 08:09:13.904685] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.659 [2024-11-28 08:09:13.904699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.659 [2024-11-28 08:09:13.917965] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.659 [2024-11-28 08:09:13.917980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.659 [2024-11-28 08:09:13.931656] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.659 [2024-11-28 08:09:13.931670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.659 [2024-11-28 08:09:13.944007] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.659 [2024-11-28 08:09:13.944021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.919 [2024-11-28 08:09:13.957116] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.919 [2024-11-28 08:09:13.957131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.919 [2024-11-28 08:09:13.970678] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.919 [2024-11-28 08:09:13.970692] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.919 [2024-11-28 08:09:13.982939] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.919 [2024-11-28 08:09:13.982954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.919 [2024-11-28 08:09:13.996428] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.919 [2024-11-28 08:09:13.996443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.919 [2024-11-28 08:09:14.009550] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.919 [2024-11-28 08:09:14.009565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.919 [2024-11-28 08:09:14.022246] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.919 [2024-11-28 08:09:14.022261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.919 [2024-11-28 08:09:14.035711] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.919 [2024-11-28 08:09:14.035725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.919 [2024-11-28 08:09:14.048509] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.919 [2024-11-28 08:09:14.048523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.919 [2024-11-28 08:09:14.061254] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.919 [2024-11-28 08:09:14.061268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.919 [2024-11-28 08:09:14.073685] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.919 [2024-11-28 08:09:14.073699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.919 [2024-11-28 08:09:14.086586] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.919 [2024-11-28 08:09:14.086601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.919 [2024-11-28 08:09:14.099754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.919 [2024-11-28 08:09:14.099768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.919 [2024-11-28 08:09:14.112302] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.919 [2024-11-28 08:09:14.112317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.919 [2024-11-28 08:09:14.125230] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.919 [2024-11-28 08:09:14.125244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.919 [2024-11-28 08:09:14.138368] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.919 [2024-11-28 08:09:14.138382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.919 [2024-11-28 08:09:14.151646] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.919 [2024-11-28 08:09:14.151660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.919 [2024-11-28 08:09:14.165194] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.919 [2024-11-28 08:09:14.165208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.919 [2024-11-28 08:09:14.178814] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.919 [2024-11-28 08:09:14.178828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.919 [2024-11-28 08:09:14.191250] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.919 [2024-11-28 08:09:14.191265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:16.919 [2024-11-28 08:09:14.203625] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:16.919 [2024-11-28 08:09:14.203640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.180 [2024-11-28 08:09:14.217018] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.180 [2024-11-28 08:09:14.217033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.180 [2024-11-28 08:09:14.229884] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.180 [2024-11-28 08:09:14.229898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.180 [2024-11-28 08:09:14.242929] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.180 [2024-11-28 08:09:14.242943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.180 [2024-11-28 08:09:14.255949] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.180 [2024-11-28 08:09:14.255964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.180 [2024-11-28 08:09:14.268494] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.180 [2024-11-28 08:09:14.268509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.180 [2024-11-28 08:09:14.281759] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.180 [2024-11-28 08:09:14.281773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.180 [2024-11-28 08:09:14.295024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.180 [2024-11-28 08:09:14.295040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.180 [2024-11-28 08:09:14.307636] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.180 [2024-11-28 08:09:14.307650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.180 [2024-11-28 08:09:14.320065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.180 [2024-11-28 08:09:14.320079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.180 [2024-11-28 08:09:14.332773] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:17.180 [2024-11-28 08:09:14.332787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.180 19255.50 IOPS, 150.43 MiB/s [2024-11-28T07:09:14.469Z] [2024-11-28 08:09:14.345113] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
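The error pairs repeating through this stretch are expected rather than a failure of the run: malloc0 already occupies NSID 1 in cnode1, so every nvmf_subsystem_add_ns attempt fails with "Requested NSID 1 already in use", and, judging by the function name in the companion line, each attempt goes through the path that pauses the subsystem around the namespace operation (nvmf_rpc_ns_paused). The test appears to retry the add in a tight loop on purpose while the 5-second randrw bdevperf job is still in flight, exercising pause/resume under load, which is why periodic throughput samples such as "19205.00 IOPS, 150.04 MiB/s" and "19255.50 IOPS, 150.43 MiB/s" stay interleaved with the errors. Those samples are self-consistent with the 8192-byte I/O size; a quick arithmetic check (not from the log, just the conversion):

# MiB/s = IOPS * io_size_bytes / 2^20, with io_size = 8192 from "-o 8192"
awk 'BEGIN {
    printf "%.2f MiB/s\n", 19205.00 * 8192 / 1048576   # -> 150.04 MiB/s
    printf "%.2f MiB/s\n", 19255.50 * 8192 / 1048576   # -> 150.43 MiB/s
}'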
00:10:17.180 [2024-11-28 08:09:14.345127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair "subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" / "nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace" repeats at roughly 13 ms intervals, about 230 times, from [2024-11-28 08:09:14.357722] through [2024-11-28 08:09:17.346309]; only the interleaved throughput markers and the final summary are kept here ...]
19278.67 IOPS, 150.61 MiB/s [2024-11-28T07:09:15.513Z]
19288.00 IOPS, 150.69 MiB/s [2024-11-28T07:09:16.556Z]
00:10:20.311 19301.80 IOPS, 150.80 MiB/s
00:10:20.311 Latency(us)
00:10:20.311 [2024-11-28T07:09:17.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:20.312 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:20.312 Nvme1n1 : 5.01 19304.03 150.81 0.00 0.00 6625.74 2812.59 14636.37
00:10:20.312 [2024-11-28T07:09:17.601Z] ===================================================================================================================
00:10:20.312 [2024-11-28T07:09:17.601Z] Total : 19304.03 150.81 0.00 0.00 6625.74 2812.59 14636.37
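A note on the wall of errors above: it appears to be deliberate churn. A background job (the one reaped by "wait 1817149" below) keeps re-issuing nvmf_subsystem_add_ns for an NSID that is still attached, and the target refuses every attempt while the I/O workload runs. A minimal stand-alone sketch of such a loop, assuming a target already serving nqn.2016-06.io.spdk:cnode1 with a namespace at NSID 1; the bdev name and iteration count are illustrative, not taken from zcopy.sh:

    # Each call should fail with "Requested NSID 1 already in use",
    # producing exactly the subsystem.c:2126 / nvmf_rpc.c:1520 pair above.
    for _ in $(seq 1 230); do
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done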
00:10:20.312 [2024-11-28 08:09:17.356086] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:20.312 [2024-11-28 08:09:17.356099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair repeats eight more times, [2024-11-28 08:09:17.368116] through [2024-11-28 08:09:17.452335], as the churn loop drains ...]
00:10:20.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1817149) - No such process
00:10:20.312 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1817149
00:10:20.312 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:20.312 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.312 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:20.312 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.312 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:20.312 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.312 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:20.312 delay0
00:10:20.312 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.312 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
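What zcopy.sh@52-54 just did, spelled out: the original namespace was removed and re-added behind delay0, a delay bdev stacked on malloc0. The four latency arguments to bdev_delay_create (-r average read, -t p99 read, -w average write, -n p99 write) are in microseconds, so the 1000000 values above should hold every I/O for roughly a second, long enough that the abort example launched next always finds commands in flight to cancel. A sketch of the same two RPCs issued directly, assuming a default RPC socket (rpc.py is the client behind the rpc_cmd wrapper):

    # Stack a ~1 s delay bdev over the existing malloc bdev, then expose it
    # to the subsystem as NSID 1 so the abort run has slow in-flight I/O.
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1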
08:09:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:20.312 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:20.312 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:20.312 08:09:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:10:20.312 [2024-11-28 08:09:17.586629] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:10:28.451 Initializing NVMe Controllers
00:10:28.451 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:28.451 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:28.451 Initialization complete. Launching workers.
00:10:28.451 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 257, failed: 27456
00:10:28.451 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 27607, failed to submit 106
00:10:28.451 success 27526, unsuccessful 81, failed 0
00:10:28.451 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:10:28.451 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:10:28.451 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:28.451 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:10:28.451 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:28.451 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:10:28.451 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:28.451 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:28.451 rmmod nvme_tcp
00:10:28.451 rmmod nvme_fabrics
00:10:28.451 rmmod nvme_keyring
00:10:28.451 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:28.451 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:10:28.451 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
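Two observations before the killprocess trace below. First, the abort run's numbers are internally consistent: 27526 successful + 81 unsuccessful = 27607 aborts submitted, and 27607 + 106 that failed to submit = 27713, matching the 257 completed + 27456 aborted I/Os reported for the namespace. Second, the teardown guards its kill with a process-name check; a sketch of that shape, inferred from the xtrace rather than copied from autotest_common.sh:

    # Kill the target app only if the pid is alive, special-casing apps
    # launched under sudo (the trace compares process_name against "sudo").
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 1
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            sudo kill "$pid"    # hypothetical sudo branch, not exercised here
        else
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid" || true
    }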
00:10:28.451 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1814217 ']'
00:10:28.451 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1814217
00:10:28.451 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1814217 ']'
00:10:28.451 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1814217
00:10:28.451 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:10:28.451 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:28.451 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1814217
00:10:28.451 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:10:28.451 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:10:28.452 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1814217'
00:10:28.452 killing process with pid 1814217
00:10:28.452 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1814217
00:10:28.452 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1814217
00:10:28.452 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:28.452 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:28.452 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:28.452 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:10:28.452 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:10:28.452 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:28.452 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:10:28.452 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:28.452 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:28.452 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:28.452 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:28.452 08:09:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:29.834 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:29.834
00:10:29.834 real 0m34.674s
00:10:29.834 user 0m45.721s
00:10:29.834 sys 0m11.948s
00:10:29.834 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:29.834 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:29.834 ************************************
00:10:29.834 END TEST nvmf_zcopy
00:10:29.834 ************************************
00:10:30.095 08:09:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:10:30.095 08:09:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:30.095 08:09:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:30.095 08:09:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:30.095 ************************************
00:10:30.095 START TEST nvmf_nmic
00:10:30.095 ************************************
00:10:30.095 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
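Before nmic.sh's storage probe is logged, autotest checks the installed lcov version (lt 1.15 2 via cmp_versions, traced below), apparently to pick the right set of --rc coverage flags. A compressed sketch of that comparison, simplified to the '<' operator actually exercised here and inferred from the trace rather than copied from scripts/common.sh:

    # Split dotted versions on '.', '-' and ':' and compare field by field;
    # succeeds (returns 0) when $1 is strictly lower than $2.
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-: v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        done
        return 1
    }

With ver1=(1 15) and ver2=(2) the first field decides: 1 is less than 2, so the call returns 0 and the branch-coverage flags get exported, as the LCOV_OPTS lines that follow show.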
00:10:30.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.095 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:30.095 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:30.095 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:30.095 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:30.095 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.095 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.095 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.095 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.095 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.095 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.095 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.095 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.095 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.095 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:30.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.096 --rc genhtml_branch_coverage=1 00:10:30.096 --rc genhtml_function_coverage=1 00:10:30.096 --rc genhtml_legend=1 00:10:30.096 --rc geninfo_all_blocks=1 00:10:30.096 --rc geninfo_unexecuted_blocks=1 00:10:30.096 00:10:30.096 ' 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:30.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.096 --rc genhtml_branch_coverage=1 00:10:30.096 --rc genhtml_function_coverage=1 00:10:30.096 --rc genhtml_legend=1 00:10:30.096 --rc geninfo_all_blocks=1 00:10:30.096 --rc geninfo_unexecuted_blocks=1 00:10:30.096 00:10:30.096 ' 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:30.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.096 --rc genhtml_branch_coverage=1 00:10:30.096 --rc genhtml_function_coverage=1 00:10:30.096 --rc genhtml_legend=1 00:10:30.096 --rc geninfo_all_blocks=1 00:10:30.096 --rc geninfo_unexecuted_blocks=1 00:10:30.096 00:10:30.096 ' 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:30.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.096 --rc genhtml_branch_coverage=1 00:10:30.096 --rc genhtml_function_coverage=1 00:10:30.096 --rc genhtml_legend=1 00:10:30.096 --rc geninfo_all_blocks=1 00:10:30.096 --rc geninfo_unexecuted_blocks=1 00:10:30.096 00:10:30.096 ' 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
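common.sh, whose defaults scroll past below, derives the initiator's host identity from nvme-cli rather than hard-coding it. A minimal sketch of that derivation, assuming nvme-cli's gen-hostnqn subcommand (the command and both resulting values appear verbatim in the trace; the parameter expansion is one plausible way to split them):

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # bare UUID, stripped of the NQN prefix
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")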
00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.096 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.357 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:30.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:30.358 
08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:30.358 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:38.502 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:38.502 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.502 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:38.503 08:09:34 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:38.503 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:38.503 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:38.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:38.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:10:38.503 00:10:38.503 --- 10.0.0.2 ping statistics --- 00:10:38.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.503 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:38.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:38.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:10:38.503 00:10:38.503 --- 10.0.0.1 ping statistics --- 00:10:38.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.503 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1823855 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1823855 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1823855 ']' 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.503 08:09:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.503 [2024-11-28 08:09:34.988536] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:10:38.503 [2024-11-28 08:09:34.988599] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.503 [2024-11-28 08:09:35.090554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:38.503 [2024-11-28 08:09:35.144864] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.503 [2024-11-28 08:09:35.144922] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.503 [2024-11-28 08:09:35.144931] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.503 [2024-11-28 08:09:35.144938] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.503 [2024-11-28 08:09:35.144945] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:38.503 [2024-11-28 08:09:35.147036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.503 [2024-11-28 08:09:35.147242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:38.503 [2024-11-28 08:09:35.147339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:38.503 [2024-11-28 08:09:35.147340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.766 [2024-11-28 08:09:35.865414] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.766 Malloc0 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.766 [2024-11-28 08:09:35.942239] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:38.766 test case1: single bdev can't be used in multiple subsystems 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.766 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.767 [2024-11-28 08:09:35.978047] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:38.767 [2024-11-28 08:09:35.978080] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:38.767 [2024-11-28 08:09:35.978089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.767 request: 00:10:38.767 { 00:10:38.767 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:38.767 "namespace": { 00:10:38.767 "bdev_name": "Malloc0", 00:10:38.767 "no_auto_visible": false, 
00:10:38.767 "hide_metadata": false 00:10:38.767 }, 00:10:38.767 "method": "nvmf_subsystem_add_ns", 00:10:38.767 "req_id": 1 00:10:38.767 } 00:10:38.767 Got JSON-RPC error response 00:10:38.767 response: 00:10:38.767 { 00:10:38.767 "code": -32602, 00:10:38.767 "message": "Invalid parameters" 00:10:38.767 } 00:10:38.767 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:38.767 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:38.767 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:38.767 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:38.767 Adding namespace failed - expected result. 00:10:38.767 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:38.767 test case2: host connect to nvmf target in multiple paths 00:10:38.767 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:38.767 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.767 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.767 [2024-11-28 08:09:35.990262] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:38.767 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.767 08:09:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:40.686 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:42.069 08:09:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:42.069 08:09:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:42.069 08:09:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:42.069 08:09:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:42.069 08:09:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:43.977 08:09:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:43.977 08:09:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:43.977 08:09:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:43.977 08:09:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:43.977 08:09:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:43.977 08:09:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:43.977 08:09:41 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:43.977 [global] 00:10:43.977 thread=1 00:10:43.977 invalidate=1 00:10:43.977 rw=write 00:10:43.977 time_based=1 00:10:43.977 runtime=1 00:10:43.977 ioengine=libaio 00:10:43.977 direct=1 00:10:43.977 bs=4096 00:10:43.977 iodepth=1 00:10:43.977 norandommap=0 00:10:43.977 numjobs=1 00:10:43.977 00:10:43.977 verify_dump=1 00:10:43.977 verify_backlog=512 00:10:43.977 verify_state_save=0 00:10:43.977 do_verify=1 00:10:43.977 verify=crc32c-intel 00:10:43.977 [job0] 00:10:43.977 filename=/dev/nvme0n1 00:10:43.977 Could not set queue depth (nvme0n1) 00:10:44.559 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:44.559 fio-3.35 00:10:44.559 Starting 1 thread 00:10:45.500 00:10:45.500 job0: (groupid=0, jobs=1): err= 0: pid=1825399: Thu Nov 28 08:09:42 2024 00:10:45.500 read: IOPS=258, BW=1032KiB/s (1057kB/s)(1060KiB/1027msec) 00:10:45.500 slat (nsec): min=26360, max=61435, avg=27368.30, stdev=3240.62 00:10:45.500 clat (usec): min=799, max=42020, avg=2666.95, stdev=8127.09 00:10:45.500 lat (usec): min=827, max=42047, avg=2694.32, stdev=8127.05 00:10:45.500 clat percentiles (usec): 00:10:45.500 | 1.00th=[ 857], 5.00th=[ 889], 10.00th=[ 906], 20.00th=[ 938], 00:10:45.500 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 996], 00:10:45.500 | 70.00th=[ 1012], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1090], 00:10:45.500 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:45.500 | 99.99th=[42206] 00:10:45.500 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:10:45.500 slat (nsec): min=9025, max=65619, avg=28724.91, stdev=10745.33 00:10:45.500 clat (usec): min=248, max=786, avg=570.08, stdev=87.41 00:10:45.500 lat (usec): min=259, max=821, avg=598.80, stdev=93.11 00:10:45.500 clat percentiles (usec): 00:10:45.500 | 1.00th=[ 355], 5.00th=[ 416], 10.00th=[ 445], 20.00th=[ 490], 00:10:45.500 | 30.00th=[ 529], 40.00th=[ 562], 50.00th=[ 578], 60.00th=[ 594], 00:10:45.500 | 70.00th=[ 619], 80.00th=[ 652], 90.00th=[ 676], 95.00th=[ 693], 00:10:45.500 | 99.00th=[ 742], 99.50th=[ 750], 99.90th=[ 791], 99.95th=[ 791], 00:10:45.500 | 99.99th=[ 791] 00:10:45.500 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:45.500 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:45.500 lat (usec) : 250=0.13%, 500=14.80%, 750=50.45%, 1000=21.88% 00:10:45.500 lat (msec) : 2=11.33%, 50=1.42% 00:10:45.500 cpu : usr=1.85%, sys=2.53%, ctx=777, majf=0, minf=1 00:10:45.500 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:45.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.500 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:45.500 issued rwts: total=265,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:45.500 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:45.500 00:10:45.500 Run status group 0 (all jobs): 00:10:45.500 READ: bw=1032KiB/s (1057kB/s), 1032KiB/s-1032KiB/s (1057kB/s-1057kB/s), io=1060KiB (1085kB), run=1027-1027msec 00:10:45.500 WRITE: bw=1994KiB/s (2042kB/s), 1994KiB/s-1994KiB/s (2042kB/s-2042kB/s), io=2048KiB (2097kB), run=1027-1027msec 00:10:45.500 00:10:45.500 Disk stats (read/write): 00:10:45.500 nvme0n1: ios=311/512, merge=0/0, ticks=586/240, in_queue=826, 
util=93.89% 00:10:45.500 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:45.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:45.761 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:45.761 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:45.761 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:45.761 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.761 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:45.761 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.761 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:45.761 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:45.761 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:45.761 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:45.761 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:45.761 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:45.761 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:45.761 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:45.761 08:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:45.761 rmmod nvme_tcp 00:10:45.761 rmmod nvme_fabrics 00:10:45.761 rmmod nvme_keyring 00:10:45.761 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:45.761 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:45.761 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:45.761 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1823855 ']' 00:10:45.761 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1823855 00:10:45.761 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1823855 ']' 00:10:45.761 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1823855 00:10:45.761 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:45.761 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.761 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1823855 00:10:46.022 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.022 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.022 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1823855' 00:10:46.022 killing process with pid 1823855 00:10:46.022 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # 
kill 1823855 00:10:46.022 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1823855 00:10:46.022 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:46.022 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:46.022 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:46.022 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:46.022 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:46.022 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:46.022 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:46.022 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:46.022 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:46.022 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.022 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.022 08:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:48.565 00:10:48.565 real 0m18.121s 00:10:48.565 user 0m49.393s 00:10:48.565 sys 0m6.639s 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:48.565 ************************************ 00:10:48.565 END TEST nvmf_nmic 00:10:48.565 ************************************ 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:48.565 ************************************ 00:10:48.565 START TEST nvmf_fio_target 00:10:48.565 ************************************ 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:48.565 * Looking for test storage... 
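The nvmf_fio_target test starting here drives fio through the same scripts/fio-wrapper used by the write job above. A standalone reproduction of that job, copied from the parameters the wrapper printed (a sketch: the job-file name is made up, and /dev/nvme0n1 assumes a namespace attached by a prior nvme connect):

    cat > nmic-write.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1

    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1
    EOF
    fio nmic-write.fio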
00:10:48.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:48.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.565 --rc genhtml_branch_coverage=1 00:10:48.565 --rc genhtml_function_coverage=1 00:10:48.565 --rc genhtml_legend=1 00:10:48.565 --rc geninfo_all_blocks=1 00:10:48.565 --rc geninfo_unexecuted_blocks=1 00:10:48.565 00:10:48.565 ' 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:48.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.565 --rc genhtml_branch_coverage=1 00:10:48.565 --rc genhtml_function_coverage=1 00:10:48.565 --rc genhtml_legend=1 00:10:48.565 --rc geninfo_all_blocks=1 00:10:48.565 --rc geninfo_unexecuted_blocks=1 00:10:48.565 00:10:48.565 ' 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:48.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.565 --rc genhtml_branch_coverage=1 00:10:48.565 --rc genhtml_function_coverage=1 00:10:48.565 --rc genhtml_legend=1 00:10:48.565 --rc geninfo_all_blocks=1 00:10:48.565 --rc geninfo_unexecuted_blocks=1 00:10:48.565 00:10:48.565 ' 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:48.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.565 --rc genhtml_branch_coverage=1 00:10:48.565 --rc genhtml_function_coverage=1 00:10:48.565 --rc genhtml_legend=1 00:10:48.565 --rc geninfo_all_blocks=1 00:10:48.565 --rc geninfo_unexecuted_blocks=1 00:10:48.565 00:10:48.565 ' 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.565 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:48.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:48.566 08:09:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:48.566 08:09:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.709 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:56.709 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:56.709 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:56.709 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:56.709 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:56.709 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:56.709 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:56.709 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:56.709 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:56.710 08:09:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:56.710 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:56.710 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.710 08:09:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:56.710 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:56.710 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:56.710 08:09:52 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:56.710 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:56.711 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:56.711 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:56.711 08:09:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:56.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:56.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:10:56.711 00:10:56.711 --- 10.0.0.2 ping statistics --- 00:10:56.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.711 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:56.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:56.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:10:56.711 00:10:56.711 --- 10.0.0.1 ping statistics --- 00:10:56.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.711 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1829839 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1829839 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1829839 ']' 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.711 08:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.711 [2024-11-28 08:09:53.229134] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:10:56.711 [2024-11-28 08:09:53.229214] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.711 [2024-11-28 08:09:53.328762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.711 [2024-11-28 08:09:53.382217] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.711 [2024-11-28 08:09:53.382271] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.711 [2024-11-28 08:09:53.382280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.711 [2024-11-28 08:09:53.382287] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.711 [2024-11-28 08:09:53.382294] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:56.711 [2024-11-28 08:09:53.384624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.711 [2024-11-28 08:09:53.384785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.711 [2024-11-28 08:09:53.384947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.711 [2024-11-28 08:09:53.384947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.972 08:09:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.972 08:09:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:56.972 08:09:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:56.972 08:09:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:56.972 08:09:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.972 08:09:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.972 08:09:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:57.234 [2024-11-28 08:09:54.266688] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:57.234 08:09:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:57.495 08:09:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:57.496 08:09:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:57.496 08:09:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:57.496 08:09:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:57.758 08:09:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:57.758 08:09:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.018 08:09:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:58.018 08:09:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:58.279 08:09:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.279 08:09:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:58.279 08:09:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.539 08:09:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:58.539 08:09:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:58.799 08:09:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:58.799 08:09:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:59.060 08:09:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:59.060 08:09:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:59.060 08:09:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:59.321 08:09:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:59.321 08:09:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:59.582 08:09:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.582 [2024-11-28 08:09:56.782749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.582 08:09:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:59.844 08:09:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:00.104 08:09:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:01.489 08:09:58 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:01.489 08:09:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:01.489 08:09:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:01.489 08:09:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:01.489 08:09:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:01.489 08:09:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:03.404 08:10:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:03.404 08:10:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:03.404 08:10:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:03.404 08:10:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:03.404 08:10:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:03.404 08:10:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:03.666 08:10:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:03.666 [global] 00:11:03.666 thread=1 00:11:03.666 invalidate=1 00:11:03.666 rw=write 00:11:03.666 time_based=1 00:11:03.666 runtime=1 00:11:03.666 ioengine=libaio 00:11:03.666 direct=1 00:11:03.666 bs=4096 00:11:03.666 iodepth=1 00:11:03.666 norandommap=0 00:11:03.666 numjobs=1 00:11:03.666 00:11:03.666 verify_dump=1 00:11:03.666 verify_backlog=512 00:11:03.666 verify_state_save=0 00:11:03.666 do_verify=1 00:11:03.666 verify=crc32c-intel 00:11:03.666 [job0] 00:11:03.666 filename=/dev/nvme0n1 00:11:03.666 [job1] 00:11:03.666 filename=/dev/nvme0n2 00:11:03.666 [job2] 00:11:03.666 filename=/dev/nvme0n3 00:11:03.666 [job3] 00:11:03.666 filename=/dev/nvme0n4 00:11:03.666 Could not set queue depth (nvme0n1) 00:11:03.666 Could not set queue depth (nvme0n2) 00:11:03.666 Could not set queue depth (nvme0n3) 00:11:03.666 Could not set queue depth (nvme0n4) 00:11:03.926 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.926 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.926 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.926 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.926 fio-3.35 00:11:03.926 Starting 4 threads 00:11:05.315 00:11:05.315 job0: (groupid=0, jobs=1): err= 0: pid=1831673: Thu Nov 28 08:10:02 2024 00:11:05.315 read: IOPS=151, BW=607KiB/s (622kB/s)(608KiB/1001msec) 00:11:05.315 slat (nsec): min=7122, max=44108, avg=24884.72, stdev=5688.52 00:11:05.315 clat (usec): min=747, max=42925, avg=4767.24, stdev=11912.33 00:11:05.315 lat (usec): min=773, max=42950, avg=4792.12, stdev=11912.20 00:11:05.315 clat percentiles (usec): 00:11:05.315 | 1.00th=[ 775], 5.00th=[ 832], 10.00th=[ 873], 20.00th=[ 914], 
00:11:05.315 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 996], 60.00th=[ 1004], 00:11:05.315 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1221], 95.00th=[42206], 00:11:05.315 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:11:05.315 | 99.99th=[42730] 00:11:05.315 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:05.315 slat (nsec): min=9890, max=46870, avg=20589.46, stdev=11245.79 00:11:05.315 clat (usec): min=279, max=861, avg=504.31, stdev=103.44 00:11:05.315 lat (usec): min=290, max=895, avg=524.90, stdev=102.74 00:11:05.315 clat percentiles (usec): 00:11:05.315 | 1.00th=[ 330], 5.00th=[ 351], 10.00th=[ 375], 20.00th=[ 416], 00:11:05.315 | 30.00th=[ 453], 40.00th=[ 474], 50.00th=[ 486], 60.00th=[ 515], 00:11:05.315 | 70.00th=[ 545], 80.00th=[ 586], 90.00th=[ 635], 95.00th=[ 693], 00:11:05.315 | 99.00th=[ 807], 99.50th=[ 840], 99.90th=[ 865], 99.95th=[ 865], 00:11:05.315 | 99.99th=[ 865] 00:11:05.315 bw ( KiB/s): min= 4087, max= 4087, per=47.35%, avg=4087.00, stdev= 0.00, samples=1 00:11:05.315 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:05.315 lat (usec) : 500=43.37%, 750=31.93%, 1000=14.46% 00:11:05.315 lat (msec) : 2=7.98%, 4=0.15%, 50=2.11% 00:11:05.315 cpu : usr=0.40%, sys=1.80%, ctx=667, majf=0, minf=1 00:11:05.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.315 issued rwts: total=152,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.315 job1: (groupid=0, jobs=1): err= 0: pid=1831674: Thu Nov 28 08:10:02 2024 00:11:05.315 read: IOPS=366, BW=1466KiB/s (1501kB/s)(1516KiB/1034msec) 00:11:05.315 slat (nsec): min=8316, max=42346, avg=25504.90, stdev=1742.40 00:11:05.315 clat (usec): min=784, max=42753, avg=1855.71, stdev=5917.65 00:11:05.315 lat (usec): min=810, max=42778, avg=1881.22, stdev=5917.30 00:11:05.315 clat percentiles (usec): 00:11:05.315 | 1.00th=[ 807], 5.00th=[ 848], 10.00th=[ 889], 20.00th=[ 938], 00:11:05.315 | 30.00th=[ 963], 40.00th=[ 988], 50.00th=[ 996], 60.00th=[ 1012], 00:11:05.315 | 70.00th=[ 1029], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1123], 00:11:05.315 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:11:05.315 | 99.99th=[42730] 00:11:05.315 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:11:05.315 slat (nsec): min=9543, max=55198, avg=29038.49, stdev=9642.33 00:11:05.315 clat (usec): min=265, max=866, avg=584.10, stdev=112.19 00:11:05.315 lat (usec): min=274, max=911, avg=613.14, stdev=116.39 00:11:05.315 clat percentiles (usec): 00:11:05.315 | 1.00th=[ 338], 5.00th=[ 383], 10.00th=[ 433], 20.00th=[ 486], 00:11:05.315 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 611], 00:11:05.315 | 70.00th=[ 635], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 766], 00:11:05.315 | 99.00th=[ 816], 99.50th=[ 840], 99.90th=[ 865], 99.95th=[ 865], 00:11:05.315 | 99.99th=[ 865] 00:11:05.315 bw ( KiB/s): min= 4087, max= 4087, per=47.35%, avg=4087.00, stdev= 0.00, samples=1 00:11:05.315 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:05.315 lat (usec) : 500=13.36%, 750=40.63%, 1000=25.03% 00:11:05.315 lat (msec) : 2=20.09%, 50=0.90% 00:11:05.315 cpu : usr=1.74%, sys=1.94%, ctx=891, majf=0, minf=2 00:11:05.315 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.315 issued rwts: total=379,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.315 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.315 job2: (groupid=0, jobs=1): err= 0: pid=1831675: Thu Nov 28 08:10:02 2024 00:11:05.315 read: IOPS=17, BW=69.4KiB/s (71.0kB/s)(72.0KiB/1038msec) 00:11:05.315 slat (nsec): min=27434, max=28298, avg=27821.78, stdev=226.96 00:11:05.315 clat (usec): min=925, max=43056, avg=39828.19, stdev=9720.56 00:11:05.315 lat (usec): min=954, max=43084, avg=39856.01, stdev=9720.44 00:11:05.315 clat percentiles (usec): 00:11:05.315 | 1.00th=[ 930], 5.00th=[ 930], 10.00th=[41157], 20.00th=[41681], 00:11:05.315 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:11:05.315 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:11:05.315 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:11:05.315 | 99.99th=[43254] 00:11:05.315 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:11:05.315 slat (nsec): min=9496, max=58289, avg=31469.51, stdev=10662.57 00:11:05.315 clat (usec): min=259, max=819, avg=588.10, stdev=101.33 00:11:05.315 lat (usec): min=270, max=854, avg=619.57, stdev=106.35 00:11:05.315 clat percentiles (usec): 00:11:05.315 | 1.00th=[ 326], 5.00th=[ 424], 10.00th=[ 453], 20.00th=[ 494], 00:11:05.315 | 30.00th=[ 545], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 619], 00:11:05.315 | 70.00th=[ 652], 80.00th=[ 676], 90.00th=[ 717], 95.00th=[ 742], 00:11:05.315 | 99.00th=[ 775], 99.50th=[ 807], 99.90th=[ 816], 99.95th=[ 816], 00:11:05.315 | 99.99th=[ 816] 00:11:05.315 bw ( KiB/s): min= 4096, max= 4096, per=47.45%, avg=4096.00, stdev= 0.00, samples=1 00:11:05.315 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:05.315 lat (usec) : 500=20.57%, 750=72.64%, 1000=3.58% 00:11:05.315 lat (msec) : 50=3.21% 00:11:05.315 cpu : usr=1.54%, sys=1.45%, ctx=531, majf=0, minf=1 00:11:05.315 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.315 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.316 job3: (groupid=0, jobs=1): err= 0: pid=1831676: Thu Nov 28 08:10:02 2024 00:11:05.316 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:05.316 slat (nsec): min=26721, max=61143, avg=27496.58, stdev=2686.86 00:11:05.316 clat (usec): min=750, max=42735, avg=1050.28, stdev=1846.75 00:11:05.316 lat (usec): min=778, max=42762, avg=1077.77, stdev=1846.72 00:11:05.316 clat percentiles (usec): 00:11:05.316 | 1.00th=[ 791], 5.00th=[ 865], 10.00th=[ 898], 20.00th=[ 938], 00:11:05.316 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 988], 00:11:05.316 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1037], 95.00th=[ 1057], 00:11:05.316 | 99.00th=[ 1090], 99.50th=[ 1156], 99.90th=[42730], 99.95th=[42730], 00:11:05.316 | 99.99th=[42730] 00:11:05.316 write: IOPS=703, BW=2813KiB/s (2881kB/s)(2816KiB/1001msec); 0 zone resets 00:11:05.316 slat (nsec): min=9322, max=65866, avg=30069.89, stdev=10180.40 00:11:05.316 clat (usec): min=275, max=936, avg=593.57, 
stdev=105.73 00:11:05.316 lat (usec): min=285, max=969, avg=623.64, stdev=110.40 00:11:05.316 clat percentiles (usec): 00:11:05.316 | 1.00th=[ 355], 5.00th=[ 404], 10.00th=[ 449], 20.00th=[ 510], 00:11:05.316 | 30.00th=[ 545], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 619], 00:11:05.316 | 70.00th=[ 660], 80.00th=[ 693], 90.00th=[ 725], 95.00th=[ 750], 00:11:05.316 | 99.00th=[ 799], 99.50th=[ 840], 99.90th=[ 938], 99.95th=[ 938], 00:11:05.316 | 99.99th=[ 938] 00:11:05.316 bw ( KiB/s): min= 4096, max= 4096, per=47.45%, avg=4096.00, stdev= 0.00, samples=1 00:11:05.316 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:05.316 lat (usec) : 500=10.44%, 750=44.24%, 1000=33.14% 00:11:05.316 lat (msec) : 2=12.09%, 50=0.08% 00:11:05.316 cpu : usr=2.60%, sys=4.60%, ctx=1216, majf=0, minf=1 00:11:05.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.316 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.316 issued rwts: total=512,704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.316 00:11:05.316 Run status group 0 (all jobs): 00:11:05.316 READ: bw=4089KiB/s (4187kB/s), 69.4KiB/s-2046KiB/s (71.0kB/s-2095kB/s), io=4244KiB (4346kB), run=1001-1038msec 00:11:05.316 WRITE: bw=8632KiB/s (8839kB/s), 1973KiB/s-2813KiB/s (2020kB/s-2881kB/s), io=8960KiB (9175kB), run=1001-1038msec 00:11:05.316 00:11:05.316 Disk stats (read/write): 00:11:05.316 nvme0n1: ios=39/512, merge=0/0, ticks=1515/257, in_queue=1772, util=96.69% 00:11:05.316 nvme0n2: ios=404/512, merge=0/0, ticks=819/278, in_queue=1097, util=95.81% 00:11:05.316 nvme0n3: ios=36/512, merge=0/0, ticks=1435/246, in_queue=1681, util=97.14% 00:11:05.316 nvme0n4: ios=460/512, merge=0/0, ticks=482/244, in_queue=726, util=89.51% 00:11:05.316 08:10:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:05.316 [global] 00:11:05.316 thread=1 00:11:05.316 invalidate=1 00:11:05.316 rw=randwrite 00:11:05.316 time_based=1 00:11:05.316 runtime=1 00:11:05.316 ioengine=libaio 00:11:05.316 direct=1 00:11:05.316 bs=4096 00:11:05.316 iodepth=1 00:11:05.316 norandommap=0 00:11:05.316 numjobs=1 00:11:05.316 00:11:05.316 verify_dump=1 00:11:05.316 verify_backlog=512 00:11:05.316 verify_state_save=0 00:11:05.316 do_verify=1 00:11:05.316 verify=crc32c-intel 00:11:05.316 [job0] 00:11:05.316 filename=/dev/nvme0n1 00:11:05.316 [job1] 00:11:05.316 filename=/dev/nvme0n2 00:11:05.316 [job2] 00:11:05.316 filename=/dev/nvme0n3 00:11:05.316 [job3] 00:11:05.316 filename=/dev/nvme0n4 00:11:05.316 Could not set queue depth (nvme0n1) 00:11:05.316 Could not set queue depth (nvme0n2) 00:11:05.316 Could not set queue depth (nvme0n3) 00:11:05.316 Could not set queue depth (nvme0n4) 00:11:05.577 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.577 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.577 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.577 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:05.577 fio-3.35 00:11:05.577 Starting 4 threads 00:11:06.962 
00:11:06.962 job0: (groupid=0, jobs=1): err= 0: pid=1832201: Thu Nov 28 08:10:04 2024 00:11:06.962 read: IOPS=17, BW=71.1KiB/s (72.8kB/s)(72.0KiB/1013msec) 00:11:06.962 slat (nsec): min=25087, max=29305, avg=25593.06, stdev=944.32 00:11:06.962 clat (usec): min=1342, max=42989, avg=39770.42, stdev=9599.44 00:11:06.962 lat (usec): min=1367, max=43014, avg=39796.01, stdev=9599.49 00:11:06.962 clat percentiles (usec): 00:11:06.962 | 1.00th=[ 1336], 5.00th=[ 1336], 10.00th=[41157], 20.00th=[41681], 00:11:06.962 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:11:06.962 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:11:06.962 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:11:06.962 | 99.99th=[42730] 00:11:06.962 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:11:06.963 slat (nsec): min=9307, max=69352, avg=25778.86, stdev=10347.34 00:11:06.963 clat (usec): min=143, max=1001, avg=546.92, stdev=142.91 00:11:06.963 lat (usec): min=155, max=1033, avg=572.70, stdev=147.42 00:11:06.963 clat percentiles (usec): 00:11:06.963 | 1.00th=[ 231], 5.00th=[ 285], 10.00th=[ 367], 20.00th=[ 420], 00:11:06.963 | 30.00th=[ 469], 40.00th=[ 506], 50.00th=[ 553], 60.00th=[ 594], 00:11:06.963 | 70.00th=[ 635], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 758], 00:11:06.963 | 99.00th=[ 824], 99.50th=[ 857], 99.90th=[ 1004], 99.95th=[ 1004], 00:11:06.963 | 99.99th=[ 1004] 00:11:06.963 bw ( KiB/s): min= 4087, max= 4087, per=46.10%, avg=4087.00, stdev= 0.00, samples=1 00:11:06.963 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:06.963 lat (usec) : 250=2.26%, 500=34.91%, 750=53.21%, 1000=6.04% 00:11:06.963 lat (msec) : 2=0.38%, 50=3.21% 00:11:06.963 cpu : usr=1.09%, sys=0.89%, ctx=531, majf=0, minf=1 00:11:06.963 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.963 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.963 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.963 job1: (groupid=0, jobs=1): err= 0: pid=1832202: Thu Nov 28 08:10:04 2024 00:11:06.963 read: IOPS=471, BW=1886KiB/s (1931kB/s)(1888KiB/1001msec) 00:11:06.963 slat (nsec): min=7082, max=61223, avg=24814.06, stdev=6460.70 00:11:06.963 clat (usec): min=307, max=43011, avg=1393.18, stdev=4964.53 00:11:06.963 lat (usec): min=316, max=43038, avg=1417.99, stdev=4964.77 00:11:06.963 clat percentiles (usec): 00:11:06.963 | 1.00th=[ 461], 5.00th=[ 553], 10.00th=[ 586], 20.00th=[ 644], 00:11:06.963 | 30.00th=[ 701], 40.00th=[ 734], 50.00th=[ 766], 60.00th=[ 807], 00:11:06.963 | 70.00th=[ 840], 80.00th=[ 906], 90.00th=[ 1090], 95.00th=[ 1156], 00:11:06.963 | 99.00th=[41157], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:11:06.963 | 99.99th=[43254] 00:11:06.963 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:06.963 slat (nsec): min=9677, max=60732, avg=29980.85, stdev=9244.11 00:11:06.963 clat (usec): min=254, max=993, avg=600.43, stdev=138.07 00:11:06.963 lat (usec): min=264, max=1026, avg=630.42, stdev=141.55 00:11:06.963 clat percentiles (usec): 00:11:06.963 | 1.00th=[ 273], 5.00th=[ 371], 10.00th=[ 429], 20.00th=[ 486], 00:11:06.963 | 30.00th=[ 529], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 644], 00:11:06.963 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 775], 
95.00th=[ 840], 00:11:06.963 | 99.00th=[ 922], 99.50th=[ 938], 99.90th=[ 996], 99.95th=[ 996], 00:11:06.963 | 99.99th=[ 996] 00:11:06.963 bw ( KiB/s): min= 4096, max= 4096, per=46.20%, avg=4096.00, stdev= 0.00, samples=1 00:11:06.963 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:06.963 lat (usec) : 500=13.01%, 750=54.27%, 1000=25.51% 00:11:06.963 lat (msec) : 2=6.50%, 50=0.71% 00:11:06.963 cpu : usr=1.30%, sys=3.00%, ctx=987, majf=0, minf=1 00:11:06.963 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.963 issued rwts: total=472,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.963 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.963 job2: (groupid=0, jobs=1): err= 0: pid=1832204: Thu Nov 28 08:10:04 2024 00:11:06.963 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:06.963 slat (nsec): min=7520, max=44670, avg=25352.73, stdev=1938.54 00:11:06.963 clat (usec): min=472, max=1246, avg=988.46, stdev=85.95 00:11:06.963 lat (usec): min=497, max=1290, avg=1013.81, stdev=86.16 00:11:06.963 clat percentiles (usec): 00:11:06.963 | 1.00th=[ 750], 5.00th=[ 840], 10.00th=[ 889], 20.00th=[ 947], 00:11:06.963 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 988], 60.00th=[ 1004], 00:11:06.963 | 70.00th=[ 1029], 80.00th=[ 1045], 90.00th=[ 1090], 95.00th=[ 1123], 00:11:06.963 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 1254], 99.95th=[ 1254], 00:11:06.963 | 99.99th=[ 1254] 00:11:06.963 write: IOPS=757, BW=3029KiB/s (3102kB/s)(3032KiB/1001msec); 0 zone resets 00:11:06.963 slat (nsec): min=9465, max=51376, avg=28870.88, stdev=8468.68 00:11:06.963 clat (usec): min=218, max=866, avg=592.73, stdev=112.34 00:11:06.963 lat (usec): min=229, max=915, avg=621.60, stdev=115.62 00:11:06.963 clat percentiles (usec): 00:11:06.963 | 1.00th=[ 302], 5.00th=[ 383], 10.00th=[ 449], 20.00th=[ 490], 00:11:06.963 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 627], 00:11:06.963 | 70.00th=[ 668], 80.00th=[ 693], 90.00th=[ 725], 95.00th=[ 758], 00:11:06.963 | 99.00th=[ 824], 99.50th=[ 857], 99.90th=[ 865], 99.95th=[ 865], 00:11:06.963 | 99.99th=[ 865] 00:11:06.963 bw ( KiB/s): min= 4087, max= 4087, per=46.10%, avg=4087.00, stdev= 0.00, samples=1 00:11:06.963 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:06.963 lat (usec) : 250=0.08%, 500=13.31%, 750=43.15%, 1000=26.30% 00:11:06.963 lat (msec) : 2=17.17% 00:11:06.963 cpu : usr=2.00%, sys=3.50%, ctx=1270, majf=0, minf=2 00:11:06.963 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.963 issued rwts: total=512,758,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.963 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.963 job3: (groupid=0, jobs=1): err= 0: pid=1832208: Thu Nov 28 08:10:04 2024 00:11:06.963 read: IOPS=17, BW=69.6KiB/s (71.2kB/s)(72.0KiB/1035msec) 00:11:06.963 slat (nsec): min=25485, max=30266, avg=26103.78, stdev=1057.74 00:11:06.963 clat (usec): min=1045, max=43032, avg=39722.90, stdev=9668.35 00:11:06.963 lat (usec): min=1075, max=43057, avg=39749.01, stdev=9667.31 00:11:06.963 clat percentiles (usec): 00:11:06.963 | 1.00th=[ 1045], 5.00th=[ 1045], 10.00th=[41157], 
20.00th=[41681], 00:11:06.963 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:11:06.963 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:11:06.963 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:11:06.963 | 99.99th=[43254] 00:11:06.963 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:11:06.963 slat (nsec): min=9527, max=54914, avg=26493.48, stdev=10163.75 00:11:06.963 clat (usec): min=247, max=988, avg=590.74, stdev=123.51 00:11:06.963 lat (usec): min=259, max=1020, avg=617.24, stdev=128.52 00:11:06.963 clat percentiles (usec): 00:11:06.963 | 1.00th=[ 293], 5.00th=[ 388], 10.00th=[ 424], 20.00th=[ 482], 00:11:06.963 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 619], 00:11:06.963 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 783], 00:11:06.963 | 99.00th=[ 857], 99.50th=[ 922], 99.90th=[ 988], 99.95th=[ 988], 00:11:06.963 | 99.99th=[ 988] 00:11:06.963 bw ( KiB/s): min= 4087, max= 4087, per=46.10%, avg=4087.00, stdev= 0.00, samples=1 00:11:06.963 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:06.963 lat (usec) : 250=0.38%, 500=23.96%, 750=63.40%, 1000=8.87% 00:11:06.963 lat (msec) : 2=0.19%, 50=3.21% 00:11:06.963 cpu : usr=0.58%, sys=1.35%, ctx=530, majf=0, minf=1 00:11:06.963 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.963 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.963 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.963 00:11:06.963 Run status group 0 (all jobs): 00:11:06.963 READ: bw=3942KiB/s (4037kB/s), 69.6KiB/s-2046KiB/s (71.2kB/s-2095kB/s), io=4080KiB (4178kB), run=1001-1035msec 00:11:06.963 WRITE: bw=8866KiB/s (9078kB/s), 1979KiB/s-3029KiB/s (2026kB/s-3102kB/s), io=9176KiB (9396kB), run=1001-1035msec 00:11:06.963 00:11:06.963 Disk stats (read/write): 00:11:06.963 nvme0n1: ios=62/512, merge=0/0, ticks=531/261, in_queue=792, util=81.96% 00:11:06.963 nvme0n2: ios=205/512, merge=0/0, ticks=918/294, in_queue=1212, util=97.82% 00:11:06.963 nvme0n3: ios=444/512, merge=0/0, ticks=443/288, in_queue=731, util=86.44% 00:11:06.963 nvme0n4: ios=17/512, merge=0/0, ticks=674/270, in_queue=944, util=91.29% 00:11:06.963 08:10:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:06.963 [global] 00:11:06.963 thread=1 00:11:06.963 invalidate=1 00:11:06.963 rw=write 00:11:06.963 time_based=1 00:11:06.963 runtime=1 00:11:06.963 ioengine=libaio 00:11:06.963 direct=1 00:11:06.963 bs=4096 00:11:06.963 iodepth=128 00:11:06.963 norandommap=0 00:11:06.963 numjobs=1 00:11:06.963 00:11:06.963 verify_dump=1 00:11:06.963 verify_backlog=512 00:11:06.963 verify_state_save=0 00:11:06.963 do_verify=1 00:11:06.963 verify=crc32c-intel 00:11:06.963 [job0] 00:11:06.963 filename=/dev/nvme0n1 00:11:06.963 [job1] 00:11:06.963 filename=/dev/nvme0n2 00:11:06.963 [job2] 00:11:06.963 filename=/dev/nvme0n3 00:11:06.963 [job3] 00:11:06.963 filename=/dev/nvme0n4 00:11:06.963 Could not set queue depth (nvme0n1) 00:11:06.963 Could not set queue depth (nvme0n2) 00:11:06.963 Could not set queue depth (nvme0n3) 00:11:06.963 Could not set queue depth (nvme0n4) 00:11:07.531 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.531 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.531 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.531 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:07.531 fio-3.35 00:11:07.531 Starting 4 threads 00:11:08.939 00:11:08.939 job0: (groupid=0, jobs=1): err= 0: pid=1832733: Thu Nov 28 08:10:05 2024 00:11:08.939 read: IOPS=8999, BW=35.2MiB/s (36.9MB/s)(35.3MiB/1005msec) 00:11:08.939 slat (nsec): min=925, max=8154.6k, avg=58814.37, stdev=423387.66 00:11:08.939 clat (usec): min=2211, max=14866, avg=7578.17, stdev=1879.08 00:11:08.939 lat (usec): min=2528, max=14868, avg=7636.98, stdev=1894.22 00:11:08.939 clat percentiles (usec): 00:11:08.939 | 1.00th=[ 3326], 5.00th=[ 5407], 10.00th=[ 5735], 20.00th=[ 6194], 00:11:08.939 | 30.00th=[ 6521], 40.00th=[ 6915], 50.00th=[ 7177], 60.00th=[ 7504], 00:11:08.939 | 70.00th=[ 8029], 80.00th=[ 8979], 90.00th=[10421], 95.00th=[11469], 00:11:08.939 | 99.00th=[12780], 99.50th=[13304], 99.90th=[14877], 99.95th=[14877], 00:11:08.939 | 99.99th=[14877] 00:11:08.939 write: IOPS=9170, BW=35.8MiB/s (37.6MB/s)(36.0MiB/1005msec); 0 zone resets 00:11:08.939 slat (nsec): min=1608, max=5930.7k, avg=46268.41, stdev=262525.60 00:11:08.939 clat (usec): min=1161, max=14844, avg=6394.62, stdev=1461.02 00:11:08.939 lat (usec): min=1172, max=14848, avg=6440.89, stdev=1475.36 00:11:08.939 clat percentiles (usec): 00:11:08.939 | 1.00th=[ 2442], 5.00th=[ 3490], 10.00th=[ 4178], 20.00th=[ 5145], 00:11:08.939 | 30.00th=[ 6456], 40.00th=[ 6652], 50.00th=[ 6849], 60.00th=[ 6980], 00:11:08.939 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7373], 95.00th=[ 7701], 00:11:08.939 | 99.00th=[ 9765], 99.50th=[10028], 99.90th=[13042], 99.95th=[13173], 00:11:08.939 | 99.99th=[14877] 00:11:08.939 bw ( KiB/s): min=36864, max=36864, per=35.61%, avg=36864.00, stdev= 0.00, samples=2 00:11:08.939 iops : min= 9216, max= 9216, avg=9216.00, stdev= 0.00, samples=2 00:11:08.939 lat (msec) : 2=0.12%, 4=4.88%, 10=88.72%, 20=6.27% 00:11:08.940 cpu : usr=6.18%, sys=8.07%, ctx=959, majf=0, minf=1 00:11:08.940 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:11:08.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.940 issued rwts: total=9044,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.940 job1: (groupid=0, jobs=1): err= 0: pid=1832736: Thu Nov 28 08:10:05 2024 00:11:08.940 read: IOPS=8652, BW=33.8MiB/s (35.4MB/s)(33.9MiB/1003msec) 00:11:08.940 slat (nsec): min=887, max=3704.0k, avg=59510.32, stdev=380181.50 00:11:08.940 clat (usec): min=1661, max=11539, avg=7457.11, stdev=899.70 00:11:08.940 lat (usec): min=2939, max=11554, avg=7516.62, stdev=953.82 00:11:08.940 clat percentiles (usec): 00:11:08.940 | 1.00th=[ 4883], 5.00th=[ 5800], 10.00th=[ 6783], 20.00th=[ 7111], 00:11:08.940 | 30.00th=[ 7242], 40.00th=[ 7373], 50.00th=[ 7504], 60.00th=[ 7570], 00:11:08.940 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8225], 95.00th=[ 8979], 00:11:08.940 | 99.00th=[10290], 99.50th=[10552], 99.90th=[11076], 99.95th=[11207], 00:11:08.940 | 99.99th=[11600] 00:11:08.940 write: IOPS=8677, BW=33.9MiB/s 
(35.5MB/s)(34.0MiB/1003msec); 0 zone resets 00:11:08.940 slat (nsec): min=1546, max=3428.1k, avg=51451.38, stdev=229055.23 00:11:08.940 clat (usec): min=4001, max=11248, avg=7161.85, stdev=758.27 00:11:08.940 lat (usec): min=4010, max=11280, avg=7213.30, stdev=769.81 00:11:08.940 clat percentiles (usec): 00:11:08.940 | 1.00th=[ 4621], 5.00th=[ 5997], 10.00th=[ 6587], 20.00th=[ 6849], 00:11:08.940 | 30.00th=[ 6980], 40.00th=[ 7046], 50.00th=[ 7111], 60.00th=[ 7242], 00:11:08.940 | 70.00th=[ 7308], 80.00th=[ 7439], 90.00th=[ 7767], 95.00th=[ 8356], 00:11:08.940 | 99.00th=[ 9765], 99.50th=[ 9896], 99.90th=[10421], 99.95th=[10683], 00:11:08.940 | 99.99th=[11207] 00:11:08.940 bw ( KiB/s): min=33416, max=36216, per=33.63%, avg=34816.00, stdev=1979.90, samples=2 00:11:08.940 iops : min= 8354, max= 9054, avg=8704.00, stdev=494.97, samples=2 00:11:08.940 lat (msec) : 2=0.01%, 4=0.26%, 10=98.77%, 20=0.96% 00:11:08.940 cpu : usr=4.59%, sys=7.58%, ctx=1040, majf=0, minf=1 00:11:08.940 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:08.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.940 issued rwts: total=8678,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.940 job2: (groupid=0, jobs=1): err= 0: pid=1832737: Thu Nov 28 08:10:05 2024 00:11:08.940 read: IOPS=2849, BW=11.1MiB/s (11.7MB/s)(11.2MiB/1009msec) 00:11:08.940 slat (nsec): min=931, max=23227k, avg=130421.97, stdev=1001797.55 00:11:08.940 clat (usec): min=802, max=58955, avg=16704.94, stdev=8885.72 00:11:08.940 lat (usec): min=7486, max=58981, avg=16835.37, stdev=8969.32 00:11:08.940 clat percentiles (usec): 00:11:08.940 | 1.00th=[ 8455], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[11338], 00:11:08.940 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12125], 60.00th=[13435], 00:11:08.940 | 70.00th=[15926], 80.00th=[23725], 90.00th=[30278], 95.00th=[40633], 00:11:08.940 | 99.00th=[42730], 99.50th=[42730], 99.90th=[49546], 99.95th=[53216], 00:11:08.940 | 99.99th=[58983] 00:11:08.940 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec); 0 zone resets 00:11:08.940 slat (nsec): min=1618, max=15124k, avg=200997.55, stdev=930399.04 00:11:08.940 clat (usec): min=8192, max=82072, avg=25794.39, stdev=16616.07 00:11:08.940 lat (usec): min=8218, max=82081, avg=25995.39, stdev=16721.38 00:11:08.940 clat percentiles (usec): 00:11:08.940 | 1.00th=[10290], 5.00th=[14353], 10.00th=[15008], 20.00th=[15533], 00:11:08.940 | 30.00th=[15926], 40.00th=[16319], 50.00th=[17695], 60.00th=[22414], 00:11:08.940 | 70.00th=[26870], 80.00th=[32900], 90.00th=[43779], 95.00th=[72877], 00:11:08.940 | 99.00th=[80217], 99.50th=[80217], 99.90th=[82314], 99.95th=[82314], 00:11:08.940 | 99.99th=[82314] 00:11:08.940 bw ( KiB/s): min=12288, max=12288, per=11.87%, avg=12288.00, stdev= 0.00, samples=2 00:11:08.940 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:11:08.940 lat (usec) : 1000=0.02% 00:11:08.940 lat (msec) : 10=3.92%, 20=62.86%, 50=28.75%, 100=4.46% 00:11:08.940 cpu : usr=2.18%, sys=2.98%, ctx=418, majf=0, minf=2 00:11:08.940 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:11:08.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.940 issued rwts: total=2875,3072,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:11:08.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.940 job3: (groupid=0, jobs=1): err= 0: pid=1832738: Thu Nov 28 08:10:05 2024 00:11:08.940 read: IOPS=5071, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec) 00:11:08.940 slat (nsec): min=981, max=11587k, avg=93817.26, stdev=633808.33 00:11:08.940 clat (usec): min=4523, max=31644, avg=11465.24, stdev=4275.61 00:11:08.940 lat (usec): min=4527, max=31646, avg=11559.06, stdev=4318.30 00:11:08.940 clat percentiles (usec): 00:11:08.940 | 1.00th=[ 6259], 5.00th=[ 7177], 10.00th=[ 8225], 20.00th=[ 8586], 00:11:08.940 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[11076], 00:11:08.940 | 70.00th=[11994], 80.00th=[13960], 90.00th=[16909], 95.00th=[20579], 00:11:08.940 | 99.00th=[27132], 99.50th=[29492], 99.90th=[30802], 99.95th=[31589], 00:11:08.940 | 99.99th=[31589] 00:11:08.940 write: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec); 0 zone resets 00:11:08.940 slat (nsec): min=1661, max=8589.4k, avg=96774.65, stdev=487716.26 00:11:08.940 clat (usec): min=1132, max=31642, avg=13530.21, stdev=6249.28 00:11:08.940 lat (usec): min=1142, max=31645, avg=13626.99, stdev=6294.65 00:11:08.940 clat percentiles (usec): 00:11:08.940 | 1.00th=[ 3687], 5.00th=[ 5342], 10.00th=[ 6456], 20.00th=[ 7898], 00:11:08.940 | 30.00th=[ 8455], 40.00th=[ 9896], 50.00th=[12649], 60.00th=[15533], 00:11:08.940 | 70.00th=[16188], 80.00th=[19792], 90.00th=[23725], 95.00th=[24511], 00:11:08.940 | 99.00th=[26608], 99.50th=[26870], 99.90th=[27657], 99.95th=[30540], 00:11:08.940 | 99.99th=[31589] 00:11:08.940 bw ( KiB/s): min=19088, max=21872, per=19.78%, avg=20480.00, stdev=1968.59, samples=2 00:11:08.940 iops : min= 4772, max= 5468, avg=5120.00, stdev=492.15, samples=2 00:11:08.940 lat (msec) : 2=0.02%, 4=0.65%, 10=45.15%, 20=41.73%, 50=12.45% 00:11:08.940 cpu : usr=4.47%, sys=4.67%, ctx=500, majf=0, minf=2 00:11:08.940 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:08.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.940 issued rwts: total=5112,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.940 00:11:08.940 Run status group 0 (all jobs): 00:11:08.940 READ: bw=99.5MiB/s (104MB/s), 11.1MiB/s-35.2MiB/s (11.7MB/s-36.9MB/s), io=100MiB (105MB), run=1003-1009msec 00:11:08.940 WRITE: bw=101MiB/s (106MB/s), 11.9MiB/s-35.8MiB/s (12.5MB/s-37.6MB/s), io=102MiB (107MB), run=1003-1009msec 00:11:08.940 00:11:08.940 Disk stats (read/write): 00:11:08.940 nvme0n1: ios=7218/7199, merge=0/0, ticks=52215/43873, in_queue=96088, util=94.39% 00:11:08.940 nvme0n2: ios=6680/7033, merge=0/0, ticks=23737/22872, in_queue=46609, util=82.75% 00:11:08.940 nvme0n3: ios=2264/2560, merge=0/0, ticks=14228/34532, in_queue=48760, util=91.10% 00:11:08.940 nvme0n4: ios=3701/4096, merge=0/0, ticks=40349/55952, in_queue=96301, util=88.98% 00:11:08.940 08:10:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:08.940 [global] 00:11:08.940 thread=1 00:11:08.940 invalidate=1 00:11:08.940 rw=randwrite 00:11:08.940 time_based=1 00:11:08.940 runtime=1 00:11:08.940 ioengine=libaio 00:11:08.940 direct=1 00:11:08.940 bs=4096 00:11:08.940 iodepth=128 00:11:08.940 norandommap=0 00:11:08.940 numjobs=1 00:11:08.940 
00:11:08.940 verify_dump=1 00:11:08.940 verify_backlog=512 00:11:08.940 verify_state_save=0 00:11:08.940 do_verify=1 00:11:08.940 verify=crc32c-intel 00:11:08.940 [job0] 00:11:08.940 filename=/dev/nvme0n1 00:11:08.940 [job1] 00:11:08.940 filename=/dev/nvme0n2 00:11:08.940 [job2] 00:11:08.940 filename=/dev/nvme0n3 00:11:08.940 [job3] 00:11:08.940 filename=/dev/nvme0n4 00:11:08.940 Could not set queue depth (nvme0n1) 00:11:08.940 Could not set queue depth (nvme0n2) 00:11:08.940 Could not set queue depth (nvme0n3) 00:11:08.940 Could not set queue depth (nvme0n4) 00:11:09.202 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.202 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.202 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.202 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:09.202 fio-3.35 00:11:09.202 Starting 4 threads 00:11:10.594 00:11:10.594 job0: (groupid=0, jobs=1): err= 0: pid=1833254: Thu Nov 28 08:10:07 2024 00:11:10.594 read: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec) 00:11:10.594 slat (nsec): min=885, max=8053.8k, avg=80379.44, stdev=541195.76 00:11:10.594 clat (usec): min=3787, max=36528, avg=9638.42, stdev=3381.36 00:11:10.594 lat (usec): min=3792, max=36532, avg=9718.80, stdev=3424.16 00:11:10.594 clat percentiles (usec): 00:11:10.594 | 1.00th=[ 5342], 5.00th=[ 6259], 10.00th=[ 6915], 20.00th=[ 7635], 00:11:10.594 | 30.00th=[ 7898], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9372], 00:11:10.594 | 70.00th=[10421], 80.00th=[11076], 90.00th=[12649], 95.00th=[15795], 00:11:10.594 | 99.00th=[23725], 99.50th=[31851], 99.90th=[35914], 99.95th=[36439], 00:11:10.594 | 99.99th=[36439] 00:11:10.594 write: IOPS=6053, BW=23.6MiB/s (24.8MB/s)(23.8MiB/1008msec); 0 zone resets 00:11:10.594 slat (nsec): min=1569, max=7484.6k, avg=84726.78, stdev=415650.05 00:11:10.594 clat (usec): min=1148, max=36515, avg=12053.27, stdev=6114.69 00:11:10.594 lat (usec): min=1157, max=36518, avg=12138.00, stdev=6155.89 00:11:10.594 clat percentiles (usec): 00:11:10.594 | 1.00th=[ 3228], 5.00th=[ 4490], 10.00th=[ 5669], 20.00th=[ 6390], 00:11:10.594 | 30.00th=[ 6849], 40.00th=[ 7832], 50.00th=[12125], 60.00th=[13698], 00:11:10.594 | 70.00th=[14877], 80.00th=[18220], 90.00th=[20579], 95.00th=[21627], 00:11:10.594 | 99.00th=[28967], 99.50th=[32375], 99.90th=[33817], 99.95th=[33817], 00:11:10.594 | 99.99th=[36439] 00:11:10.594 bw ( KiB/s): min=21448, max=26352, per=23.71%, avg=23900.00, stdev=3467.65, samples=2 00:11:10.594 iops : min= 5362, max= 6588, avg=5975.00, stdev=866.91, samples=2 00:11:10.594 lat (msec) : 2=0.08%, 4=1.63%, 10=53.67%, 20=37.16%, 50=7.47% 00:11:10.594 cpu : usr=4.67%, sys=4.97%, ctx=555, majf=0, minf=1 00:11:10.594 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:10.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.594 issued rwts: total=5632,6102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.594 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.594 job1: (groupid=0, jobs=1): err= 0: pid=1833255: Thu Nov 28 08:10:07 2024 00:11:10.594 read: IOPS=3849, BW=15.0MiB/s (15.8MB/s)(15.1MiB/1005msec) 00:11:10.594 slat (nsec): min=932, max=14197k, 
avg=108388.47, stdev=723079.49 00:11:10.594 clat (usec): min=2241, max=56205, avg=12308.85, stdev=7568.21 00:11:10.594 lat (usec): min=4639, max=56211, avg=12417.24, stdev=7664.08 00:11:10.594 clat percentiles (usec): 00:11:10.594 | 1.00th=[ 4686], 5.00th=[ 6390], 10.00th=[ 7504], 20.00th=[ 8979], 00:11:10.594 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:11:10.594 | 70.00th=[10945], 80.00th=[13042], 90.00th=[21627], 95.00th=[27657], 00:11:10.594 | 99.00th=[46400], 99.50th=[48497], 99.90th=[56361], 99.95th=[56361], 00:11:10.594 | 99.99th=[56361] 00:11:10.594 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:11:10.594 slat (nsec): min=1618, max=13367k, avg=132947.74, stdev=683680.73 00:11:10.594 clat (usec): min=4192, max=73260, avg=19458.48, stdev=13857.81 00:11:10.594 lat (usec): min=4200, max=73267, avg=19591.42, stdev=13945.39 00:11:10.594 clat percentiles (usec): 00:11:10.594 | 1.00th=[ 4293], 5.00th=[ 6849], 10.00th=[ 8291], 20.00th=[ 9634], 00:11:10.594 | 30.00th=[11469], 40.00th=[13304], 50.00th=[13829], 60.00th=[14615], 00:11:10.594 | 70.00th=[20841], 80.00th=[28967], 90.00th=[36439], 95.00th=[56886], 00:11:10.594 | 99.00th=[67634], 99.50th=[67634], 99.90th=[72877], 99.95th=[72877], 00:11:10.594 | 99.99th=[72877] 00:11:10.594 bw ( KiB/s): min=12288, max=20480, per=16.26%, avg=16384.00, stdev=5792.62, samples=2 00:11:10.594 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:11:10.594 lat (msec) : 4=0.01%, 10=33.27%, 20=44.03%, 50=19.59%, 100=3.10% 00:11:10.594 cpu : usr=2.09%, sys=3.69%, ctx=486, majf=0, minf=1 00:11:10.594 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:10.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.594 issued rwts: total=3869,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.594 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.594 job2: (groupid=0, jobs=1): err= 0: pid=1833256: Thu Nov 28 08:10:07 2024 00:11:10.594 read: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec) 00:11:10.594 slat (nsec): min=917, max=4252.0k, avg=69483.93, stdev=428068.52 00:11:10.594 clat (usec): min=5280, max=13224, avg=8665.52, stdev=1037.70 00:11:10.594 lat (usec): min=5285, max=13311, avg=8735.01, stdev=1099.74 00:11:10.594 clat percentiles (usec): 00:11:10.594 | 1.00th=[ 5997], 5.00th=[ 6718], 10.00th=[ 7373], 20.00th=[ 8160], 00:11:10.594 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8848], 00:11:10.594 | 70.00th=[ 8979], 80.00th=[ 9110], 90.00th=[ 9634], 95.00th=[10552], 00:11:10.594 | 99.00th=[11863], 99.50th=[12256], 99.90th=[12780], 99.95th=[12911], 00:11:10.594 | 99.99th=[13173] 00:11:10.594 write: IOPS=7489, BW=29.3MiB/s (30.7MB/s)(29.4MiB/1004msec); 0 zone resets 00:11:10.594 slat (nsec): min=1522, max=11855k, avg=62583.38, stdev=305906.53 00:11:10.594 clat (usec): min=3514, max=28293, avg=8650.37, stdev=2598.60 00:11:10.594 lat (usec): min=4246, max=28300, avg=8712.95, stdev=2611.58 00:11:10.594 clat percentiles (usec): 00:11:10.594 | 1.00th=[ 5080], 5.00th=[ 6456], 10.00th=[ 7439], 20.00th=[ 7832], 00:11:10.594 | 30.00th=[ 8029], 40.00th=[ 8094], 50.00th=[ 8225], 60.00th=[ 8291], 00:11:10.594 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 9634], 95.00th=[11469], 00:11:10.594 | 99.00th=[23462], 99.50th=[27132], 99.90th=[28181], 99.95th=[28181], 00:11:10.594 | 99.99th=[28181] 00:11:10.594 bw ( KiB/s): 
min=29000, max=30136, per=29.34%, avg=29568.00, stdev=803.27, samples=2 00:11:10.594 iops : min= 7250, max= 7534, avg=7392.00, stdev=200.82, samples=2 00:11:10.594 lat (msec) : 4=0.01%, 10=91.59%, 20=7.70%, 50=0.70% 00:11:10.594 cpu : usr=3.79%, sys=6.68%, ctx=940, majf=0, minf=1 00:11:10.594 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:10.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.594 issued rwts: total=7168,7519,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.594 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.594 job3: (groupid=0, jobs=1): err= 0: pid=1833257: Thu Nov 28 08:10:07 2024 00:11:10.594 read: IOPS=7557, BW=29.5MiB/s (31.0MB/s)(29.6MiB/1004msec) 00:11:10.594 slat (nsec): min=957, max=27985k, avg=70385.41, stdev=581904.86 00:11:10.594 clat (usec): min=3191, max=53320, avg=9377.44, stdev=5046.18 00:11:10.594 lat (usec): min=3197, max=53327, avg=9447.83, stdev=5074.90 00:11:10.594 clat percentiles (usec): 00:11:10.594 | 1.00th=[ 3982], 5.00th=[ 6325], 10.00th=[ 6783], 20.00th=[ 7308], 00:11:10.594 | 30.00th=[ 7767], 40.00th=[ 8029], 50.00th=[ 8225], 60.00th=[ 8717], 00:11:10.594 | 70.00th=[ 9241], 80.00th=[10552], 90.00th=[12125], 95.00th=[13829], 00:11:10.594 | 99.00th=[43254], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070], 00:11:10.595 | 99.99th=[53216] 00:11:10.595 write: IOPS=7649, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1004msec); 0 zone resets 00:11:10.595 slat (nsec): min=1599, max=8711.6k, avg=55429.51, stdev=360973.03 00:11:10.595 clat (usec): min=1181, max=15323, avg=7316.54, stdev=1663.94 00:11:10.595 lat (usec): min=1190, max=19484, avg=7371.97, stdev=1683.54 00:11:10.595 clat percentiles (usec): 00:11:10.595 | 1.00th=[ 2737], 5.00th=[ 4113], 10.00th=[ 4752], 20.00th=[ 6194], 00:11:10.595 | 30.00th=[ 7308], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 7963], 00:11:10.595 | 70.00th=[ 8029], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[10028], 00:11:10.595 | 99.00th=[10945], 99.50th=[11469], 99.90th=[14877], 99.95th=[15008], 00:11:10.595 | 99.99th=[15270] 00:11:10.595 bw ( KiB/s): min=28672, max=32768, per=30.48%, avg=30720.00, stdev=2896.31, samples=2 00:11:10.595 iops : min= 7168, max= 8192, avg=7680.00, stdev=724.08, samples=2 00:11:10.595 lat (msec) : 2=0.23%, 4=2.62%, 10=82.78%, 20=13.35%, 50=0.73% 00:11:10.595 lat (msec) : 100=0.28% 00:11:10.595 cpu : usr=5.78%, sys=7.18%, ctx=697, majf=0, minf=1 00:11:10.595 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:10.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:10.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:10.595 issued rwts: total=7588,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:10.595 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:10.595 00:11:10.595 Run status group 0 (all jobs): 00:11:10.595 READ: bw=94.0MiB/s (98.6MB/s), 15.0MiB/s-29.5MiB/s (15.8MB/s-31.0MB/s), io=94.8MiB (99.4MB), run=1004-1008msec 00:11:10.595 WRITE: bw=98.4MiB/s (103MB/s), 15.9MiB/s-29.9MiB/s (16.7MB/s-31.3MB/s), io=99.2MiB (104MB), run=1004-1008msec 00:11:10.595 00:11:10.595 Disk stats (read/write): 00:11:10.595 nvme0n1: ios=4658/4959, merge=0/0, ticks=42511/58976, in_queue=101487, util=95.99% 00:11:10.595 nvme0n2: ios=3091/3119, merge=0/0, ticks=22461/30507, in_queue=52968, util=97.86% 00:11:10.595 nvme0n3: ios=6185/6335, merge=0/0, 
ticks=26235/24092, in_queue=50327, util=96.00% 00:11:10.595 nvme0n4: ios=6633/6663, merge=0/0, ticks=54330/46840, in_queue=101170, util=89.54% 00:11:10.595 08:10:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:10.595 08:10:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1833590 00:11:10.595 08:10:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:10.595 08:10:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:10.595 [global] 00:11:10.595 thread=1 00:11:10.595 invalidate=1 00:11:10.595 rw=read 00:11:10.595 time_based=1 00:11:10.595 runtime=10 00:11:10.595 ioengine=libaio 00:11:10.595 direct=1 00:11:10.595 bs=4096 00:11:10.595 iodepth=1 00:11:10.595 norandommap=1 00:11:10.595 numjobs=1 00:11:10.595 00:11:10.595 [job0] 00:11:10.595 filename=/dev/nvme0n1 00:11:10.595 [job1] 00:11:10.595 filename=/dev/nvme0n2 00:11:10.595 [job2] 00:11:10.595 filename=/dev/nvme0n3 00:11:10.595 [job3] 00:11:10.595 filename=/dev/nvme0n4 00:11:10.595 Could not set queue depth (nvme0n1) 00:11:10.595 Could not set queue depth (nvme0n2) 00:11:10.595 Could not set queue depth (nvme0n3) 00:11:10.595 Could not set queue depth (nvme0n4) 00:11:10.856 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.856 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.856 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.856 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.856 fio-3.35 00:11:10.856 Starting 4 threads 00:11:13.485 08:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:13.485 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=2973696, buflen=4096 00:11:13.485 fio: pid=1833787, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:13.485 08:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:13.751 08:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:13.751 08:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:13.751 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=5480448, buflen=4096 00:11:13.751 fio: pid=1833786, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:14.013 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11902976, buflen=4096 00:11:14.013 fio: pid=1833784, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:14.013 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.013 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 
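The io_u errors above are the point of this phase: the RAID/concat and malloc bdevs backing the active namespaces are deleted while the 10-second fio read job is still in flight, so every job is expected to die with "Operation not supported". A minimal sketch of the hotplug pattern being driven here (command and bdev names taken from the trace above; error handling simplified):

    # start a long-running read workload in the background
    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3                                   # let I/O ramp up
    # pull the block devices out from under the running jobs
    scripts/rpc.py bdev_raid_delete concat0
    scripts/rpc.py bdev_raid_delete raid0
    for m in Malloc0 Malloc1 Malloc2; do
        scripts/rpc.py bdev_malloc_delete "$m"
    done
    # fio must now exit non-zero -- that is the pass condition
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'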
00:11:14.013 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=1339392, buflen=4096 00:11:14.013 fio: pid=1833785, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:14.013 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.013 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:14.276 00:11:14.276 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1833784: Thu Nov 28 08:10:11 2024 00:11:14.276 read: IOPS=985, BW=3939KiB/s (4034kB/s)(11.4MiB/2951msec) 00:11:14.276 slat (usec): min=5, max=12921, avg=33.32, stdev=280.82 00:11:14.276 clat (usec): min=443, max=42899, avg=968.29, stdev=1364.63 00:11:14.276 lat (usec): min=450, max=42925, avg=1001.62, stdev=1393.48 00:11:14.276 clat percentiles (usec): 00:11:14.276 | 1.00th=[ 578], 5.00th=[ 676], 10.00th=[ 750], 20.00th=[ 832], 00:11:14.276 | 30.00th=[ 889], 40.00th=[ 930], 50.00th=[ 947], 60.00th=[ 963], 00:11:14.276 | 70.00th=[ 979], 80.00th=[ 996], 90.00th=[ 1029], 95.00th=[ 1057], 00:11:14.276 | 99.00th=[ 1188], 99.50th=[ 1369], 99.90th=[41157], 99.95th=[42206], 00:11:14.276 | 99.99th=[42730] 00:11:14.276 bw ( KiB/s): min= 3440, max= 4264, per=58.70%, avg=3966.20, stdev=335.17, samples=5 00:11:14.276 iops : min= 860, max= 1066, avg=991.40, stdev=83.69, samples=5 00:11:14.276 lat (usec) : 500=0.10%, 750=10.15%, 1000=70.93% 00:11:14.276 lat (msec) : 2=18.61%, 10=0.03%, 20=0.03%, 50=0.10% 00:11:14.276 cpu : usr=1.49%, sys=4.14%, ctx=2910, majf=0, minf=1 00:11:14.276 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.276 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.276 issued rwts: total=2907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.276 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.276 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1833785: Thu Nov 28 08:10:11 2024 00:11:14.276 read: IOPS=104, BW=417KiB/s (427kB/s)(1308KiB/3136msec) 00:11:14.276 slat (usec): min=6, max=7175, avg=91.39, stdev=631.41 00:11:14.276 clat (usec): min=236, max=50476, avg=9459.34, stdev=16993.16 00:11:14.276 lat (usec): min=243, max=50502, avg=9550.93, stdev=16973.26 00:11:14.276 clat percentiles (usec): 00:11:14.276 | 1.00th=[ 285], 5.00th=[ 322], 10.00th=[ 424], 20.00th=[ 553], 00:11:14.276 | 30.00th=[ 644], 40.00th=[ 717], 50.00th=[ 766], 60.00th=[ 816], 00:11:14.276 | 70.00th=[ 873], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:11:14.276 | 99.00th=[43254], 99.50th=[43254], 99.90th=[50594], 99.95th=[50594], 00:11:14.276 | 99.99th=[50594] 00:11:14.276 bw ( KiB/s): min= 96, max= 1988, per=6.08%, avg=411.33, stdev=772.41, samples=6 00:11:14.276 iops : min= 24, max= 497, avg=102.83, stdev=193.10, samples=6 00:11:14.276 lat (usec) : 250=0.30%, 500=14.02%, 750=32.01%, 1000=31.10% 00:11:14.276 lat (msec) : 2=0.91%, 10=0.30%, 50=20.73%, 100=0.30% 00:11:14.276 cpu : usr=0.03%, sys=0.38%, ctx=333, majf=0, minf=2 00:11:14.276 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.276 complete : 
0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.276 issued rwts: total=328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.276 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.276 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1833786: Thu Nov 28 08:10:11 2024 00:11:14.276 read: IOPS=486, BW=1944KiB/s (1991kB/s)(5352KiB/2753msec) 00:11:14.276 slat (nsec): min=6014, max=71858, avg=24663.10, stdev=7680.21 00:11:14.276 clat (usec): min=345, max=43009, avg=2010.79, stdev=7213.49 00:11:14.276 lat (usec): min=352, max=43036, avg=2035.45, stdev=7214.15 00:11:14.276 clat percentiles (usec): 00:11:14.276 | 1.00th=[ 457], 5.00th=[ 529], 10.00th=[ 570], 20.00th=[ 619], 00:11:14.276 | 30.00th=[ 668], 40.00th=[ 701], 50.00th=[ 725], 60.00th=[ 758], 00:11:14.276 | 70.00th=[ 791], 80.00th=[ 816], 90.00th=[ 848], 95.00th=[ 889], 00:11:14.276 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:11:14.276 | 99.99th=[43254] 00:11:14.276 bw ( KiB/s): min= 96, max= 5440, per=31.54%, avg=2131.20, stdev=2789.23, samples=5 00:11:14.276 iops : min= 24, max= 1360, avg=532.80, stdev=697.31, samples=5 00:11:14.276 lat (usec) : 500=3.73%, 750=53.10%, 1000=39.88% 00:11:14.276 lat (msec) : 2=0.07%, 50=3.14% 00:11:14.276 cpu : usr=0.40%, sys=2.14%, ctx=1341, majf=0, minf=2 00:11:14.276 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.276 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.276 issued rwts: total=1339,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.276 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.276 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1833787: Thu Nov 28 08:10:11 2024 00:11:14.276 read: IOPS=285, BW=1142KiB/s (1169kB/s)(2904KiB/2544msec) 00:11:14.276 slat (nsec): min=6445, max=61563, avg=25144.16, stdev=7204.94 00:11:14.276 clat (usec): min=380, max=45038, avg=3442.08, stdev=10146.52 00:11:14.276 lat (usec): min=387, max=45069, avg=3467.22, stdev=10146.92 00:11:14.276 clat percentiles (usec): 00:11:14.276 | 1.00th=[ 469], 5.00th=[ 562], 10.00th=[ 611], 20.00th=[ 676], 00:11:14.276 | 30.00th=[ 709], 40.00th=[ 750], 50.00th=[ 791], 60.00th=[ 824], 00:11:14.276 | 70.00th=[ 857], 80.00th=[ 930], 90.00th=[ 988], 95.00th=[41681], 00:11:14.276 | 99.00th=[42730], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:11:14.276 | 99.99th=[44827] 00:11:14.276 bw ( KiB/s): min= 96, max= 4168, per=16.62%, avg=1123.20, stdev=1729.76, samples=5 00:11:14.276 iops : min= 24, max= 1042, avg=280.80, stdev=432.44, samples=5 00:11:14.276 lat (usec) : 500=2.06%, 750=38.51%, 1000=49.93% 00:11:14.276 lat (msec) : 2=2.75%, 4=0.14%, 50=6.46% 00:11:14.276 cpu : usr=0.35%, sys=1.14%, ctx=727, majf=0, minf=2 00:11:14.276 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.276 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.276 issued rwts: total=727,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.276 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.276 00:11:14.276 Run status group 0 (all jobs): 00:11:14.276 READ: bw=6756KiB/s (6919kB/s), 417KiB/s-3939KiB/s (427kB/s-4034kB/s), io=20.7MiB (21.7MB), run=2544-3136msec 
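A quick sanity check on the group summary above: fio's aggregate figure is total I/O divided by the span of the group, so io=21.7MB over the ~3.136 s ceiling of run=2544-3136msec gives 21.7 / 3.136 ≈ 6.92 MB/s, matching the reported 6919kB/s (6756KiB/s). The 417KiB/s-3939KiB/s range simply brackets the slowest (job1) and fastest (job0) individual jobs.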
00:11:14.276 00:11:14.276 Disk stats (read/write): 00:11:14.276 nvme0n1: ios=2776/0, merge=0/0, ticks=2549/0, in_queue=2549, util=92.05% 00:11:14.276 nvme0n2: ios=319/0, merge=0/0, ticks=2956/0, in_queue=2956, util=93.96% 00:11:14.276 nvme0n3: ios=1333/0, merge=0/0, ticks=2386/0, in_queue=2386, util=95.50% 00:11:14.276 nvme0n4: ios=726/0, merge=0/0, ticks=2446/0, in_queue=2446, util=96.34% 00:11:14.276 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.276 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:14.535 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.535 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:14.796 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.796 08:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:14.796 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.797 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:15.058 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:15.058 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1833590 00:11:15.058 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:15.058 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:15.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.058 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:15.058 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:15.058 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:15.058 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.058 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:15.058 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.058 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:15.058 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:15.058 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:15.058 nvmf hotplug test: fio failed as expected 00:11:15.058 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.320 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:15.320 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:15.320 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:15.320 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:15.320 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:15.320 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:15.320 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:15.320 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:15.320 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:15.320 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:15.320 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:15.320 rmmod nvme_tcp 00:11:15.320 rmmod nvme_fabrics 00:11:15.320 rmmod nvme_keyring 00:11:15.320 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:15.320 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:15.320 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:15.320 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1829839 ']' 00:11:15.320 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1829839 00:11:15.320 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1829839 ']' 00:11:15.320 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1829839 00:11:15.320 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:15.320 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:15.320 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1829839 00:11:15.581 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:15.581 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:15.581 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1829839' 00:11:15.581 killing process with pid 1829839 00:11:15.581 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1829839 00:11:15.581 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1829839 00:11:15.581 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:15.581 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:15.581 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:11:15.581 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:15.581 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:15.581 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:15.581 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:15.581 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:15.581 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:15.581 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.581 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.581 08:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.125 08:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:18.125 00:11:18.125 real 0m29.470s 00:11:18.125 user 2m38.061s 00:11:18.125 sys 0m9.655s 00:11:18.125 08:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.125 08:10:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.125 ************************************ 00:11:18.125 END TEST nvmf_fio_target 00:11:18.125 ************************************ 00:11:18.125 08:10:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:18.125 08:10:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:18.125 08:10:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.125 08:10:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:18.125 ************************************ 00:11:18.125 START TEST nvmf_bdevio 00:11:18.125 ************************************ 00:11:18.125 08:10:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:18.125 * Looking for test storage... 
00:11:18.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:18.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.125 --rc genhtml_branch_coverage=1 00:11:18.125 --rc genhtml_function_coverage=1 00:11:18.125 --rc genhtml_legend=1 00:11:18.125 --rc geninfo_all_blocks=1 00:11:18.125 --rc geninfo_unexecuted_blocks=1 00:11:18.125 00:11:18.125 ' 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:18.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.125 --rc genhtml_branch_coverage=1 00:11:18.125 --rc genhtml_function_coverage=1 00:11:18.125 --rc genhtml_legend=1 00:11:18.125 --rc geninfo_all_blocks=1 00:11:18.125 --rc geninfo_unexecuted_blocks=1 00:11:18.125 00:11:18.125 ' 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:18.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.125 --rc genhtml_branch_coverage=1 00:11:18.125 --rc genhtml_function_coverage=1 00:11:18.125 --rc genhtml_legend=1 00:11:18.125 --rc geninfo_all_blocks=1 00:11:18.125 --rc geninfo_unexecuted_blocks=1 00:11:18.125 00:11:18.125 ' 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:18.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.125 --rc genhtml_branch_coverage=1 00:11:18.125 --rc genhtml_function_coverage=1 00:11:18.125 --rc genhtml_legend=1 00:11:18.125 --rc geninfo_all_blocks=1 00:11:18.125 --rc geninfo_unexecuted_blocks=1 00:11:18.125 00:11:18.125 ' 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.125 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:18.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:18.126 08:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:26.269 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:26.269 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:26.269 08:10:22 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:26.269 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:26.269 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:26.269 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:26.270 
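Device discovery here is plain sysfs walking: for each whitelisted PCI function, the script lists the kernel network interfaces registered under that function and keeps the ones that are up. A condensed sketch of the loop being traced above (PCI addresses and interface names as found on this test node):

    # map each E810 port to its net interface via sysfs
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            echo "Found net devices under $pci: ${path##*/}"
        done
    done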
08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:26.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:26.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:11:26.270 00:11:26.270 --- 10.0.0.2 ping statistics --- 00:11:26.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.270 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:26.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:26.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:11:26.270 00:11:26.270 --- 10.0.0.1 ping statistics --- 00:11:26.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.270 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1838833 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1838833 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1838833 ']' 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.270 08:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.270 [2024-11-28 08:10:22.630806] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
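(Annotation) The nvmf_tcp_init block above is the heart of this phy test bed: the two E810 ports found earlier are split between the default namespace (initiator side, cvl_0_1 at 10.0.0.1) and a fresh namespace cvl_0_0_ns_spdk (target side, cvl_0_0 at 10.0.0.2), an iptables ACCEPT rule opens TCP/4420, and reachability is verified with a ping in each direction. A minimal standalone sketch of the same plumbing, run as root; interface, namespace, and address names are taken verbatim from the log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'               # tagged so cleanup can strip only this rule
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

Hiding the target behind a namespace lets one host exercise a real NIC-to-NIC TCP path (the two ports are presumably cabled back-to-back) without an external switch.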
00:11:26.270 [2024-11-28 08:10:22.630881] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.270 [2024-11-28 08:10:22.730801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.270 [2024-11-28 08:10:22.783166] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.270 [2024-11-28 08:10:22.783223] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.270 [2024-11-28 08:10:22.783232] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.270 [2024-11-28 08:10:22.783240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.270 [2024-11-28 08:10:22.783246] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:26.270 [2024-11-28 08:10:22.785330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:26.270 [2024-11-28 08:10:22.785599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:26.270 [2024-11-28 08:10:22.785667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:26.270 [2024-11-28 08:10:22.785669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.270 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:26.270 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:26.270 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:26.270 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:26.270 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.270 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.270 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:26.270 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.270 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.270 [2024-11-28 08:10:23.506670] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.270 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.270 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:26.270 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.270 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.270 Malloc0 00:11:26.270 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.270 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:26.270 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.270 08:10:23 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.533 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.533 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:26.533 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.533 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.533 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.533 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.533 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.533 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.533 [2024-11-28 08:10:23.584733] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.533 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.533 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:26.533 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:26.533 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:26.533 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:26.533 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:26.533 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:26.533 { 00:11:26.533 "params": { 00:11:26.533 "name": "Nvme$subsystem", 00:11:26.533 "trtype": "$TEST_TRANSPORT", 00:11:26.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:26.533 "adrfam": "ipv4", 00:11:26.533 "trsvcid": "$NVMF_PORT", 00:11:26.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:26.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:26.533 "hdgst": ${hdgst:-false}, 00:11:26.533 "ddgst": ${ddgst:-false} 00:11:26.533 }, 00:11:26.533 "method": "bdev_nvme_attach_controller" 00:11:26.533 } 00:11:26.533 EOF 00:11:26.533 )") 00:11:26.533 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:26.533 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:26.533 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:26.533 08:10:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:26.533 "params": { 00:11:26.533 "name": "Nvme1", 00:11:26.533 "trtype": "tcp", 00:11:26.533 "traddr": "10.0.0.2", 00:11:26.533 "adrfam": "ipv4", 00:11:26.533 "trsvcid": "4420", 00:11:26.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:26.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:26.533 "hdgst": false, 00:11:26.533 "ddgst": false 00:11:26.533 }, 00:11:26.533 "method": "bdev_nvme_attach_controller" 00:11:26.533 }' 00:11:26.533 [2024-11-28 08:10:23.641919] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
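(Annotation) Between the transport init and the listener notice above, the target is assembled entirely over the RPC socket: a TCP transport, a 64 MiB / 512-byte-block Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and a listener at 10.0.0.2:4420; bdevio is then launched with a generated JSON config that attaches the subsystem as controller Nvme1. A sketch of the same sequence issued by hand with scripts/rpc.py (arguments copied from the rpc_cmd lines above; paths assume an in-tree SPDK build like the one in this workspace):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    # bdevio consumes the bdev_nvme_attach_controller JSON printed below; a plain
    # file works the same as the /dev/fd/62 substitution the test uses:
    ./test/bdev/bdevio/bdevio --json /tmp/nvme1_config.json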
00:11:26.533 [2024-11-28 08:10:23.641984] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1839179 ] 00:11:26.533 [2024-11-28 08:10:23.735467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:26.533 [2024-11-28 08:10:23.791979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.533 [2024-11-28 08:10:23.792144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.533 [2024-11-28 08:10:23.792144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.107 I/O targets: 00:11:27.107 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:27.107 00:11:27.107 00:11:27.107 CUnit - A unit testing framework for C - Version 2.1-3 00:11:27.107 http://cunit.sourceforge.net/ 00:11:27.107 00:11:27.107 00:11:27.107 Suite: bdevio tests on: Nvme1n1 00:11:27.107 Test: blockdev write read block ...passed 00:11:27.107 Test: blockdev write zeroes read block ...passed 00:11:27.107 Test: blockdev write zeroes read no split ...passed 00:11:27.107 Test: blockdev write zeroes read split ...passed 00:11:27.107 Test: blockdev write zeroes read split partial ...passed 00:11:27.107 Test: blockdev reset ...[2024-11-28 08:10:24.305141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:27.107 [2024-11-28 08:10:24.305251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5c970 (9): Bad file descriptor 00:11:27.107 [2024-11-28 08:10:24.365440] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:27.107 passed 00:11:27.107 Test: blockdev write read 8 blocks ...passed 00:11:27.107 Test: blockdev write read size > 128k ...passed 00:11:27.107 Test: blockdev write read invalid size ...passed 00:11:27.368 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:27.368 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:27.368 Test: blockdev write read max offset ...passed 00:11:27.368 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:27.368 Test: blockdev writev readv 8 blocks ...passed 00:11:27.368 Test: blockdev writev readv 30 x 1block ...passed 00:11:27.368 Test: blockdev writev readv block ...passed 00:11:27.368 Test: blockdev writev readv size > 128k ...passed 00:11:27.368 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:27.368 Test: blockdev comparev and writev ...[2024-11-28 08:10:24.588668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.368 [2024-11-28 08:10:24.588703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:27.368 [2024-11-28 08:10:24.588719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.368 [2024-11-28 08:10:24.588727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:27.368 [2024-11-28 08:10:24.589203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.368 [2024-11-28 08:10:24.589216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:27.368 [2024-11-28 08:10:24.589231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.368 [2024-11-28 08:10:24.589240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:27.368 [2024-11-28 08:10:24.589711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.368 [2024-11-28 08:10:24.589723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:27.368 [2024-11-28 08:10:24.589737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.368 [2024-11-28 08:10:24.589745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:27.368 [2024-11-28 08:10:24.590212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.368 [2024-11-28 08:10:24.590225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:27.368 [2024-11-28 08:10:24.590239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:27.368 [2024-11-28 08:10:24.590246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:27.368 passed 00:11:27.629 Test: blockdev nvme passthru rw ...passed 00:11:27.629 Test: blockdev nvme passthru vendor specific ...[2024-11-28 08:10:24.674963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:27.629 [2024-11-28 08:10:24.674984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:27.629 [2024-11-28 08:10:24.675328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:27.629 [2024-11-28 08:10:24.675340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:27.629 [2024-11-28 08:10:24.675697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:27.629 [2024-11-28 08:10:24.675708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:27.629 [2024-11-28 08:10:24.676068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:27.629 [2024-11-28 08:10:24.676080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:27.629 passed 00:11:27.629 Test: blockdev nvme admin passthru ...passed 00:11:27.629 Test: blockdev copy ...passed 00:11:27.629 00:11:27.629 Run Summary: Type Total Ran Passed Failed Inactive 00:11:27.629 suites 1 1 n/a 0 0 00:11:27.629 tests 23 23 23 0 0 00:11:27.629 asserts 152 152 152 0 n/a 00:11:27.629 00:11:27.629 Elapsed time = 1.207 seconds 00:11:27.629 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:27.629 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.629 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:27.630 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.630 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:27.630 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:27.630 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:27.630 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:27.630 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:27.630 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:27.630 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:27.630 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:27.630 rmmod nvme_tcp 00:11:27.630 rmmod nvme_fabrics 00:11:27.630 rmmod nvme_keyring 00:11:27.630 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:27.630 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:27.630 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
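(Annotation) The run summary above shows all 23 bdevio cases passing in 1.207 seconds. The COMPARE FAILURE (02/85) / ABORTED - FAILED FUSED (00/09) notice pairs in the comparev-and-writev test appear to be the intended path of a fused compare-and-write whose compare miscompares, aborting the fused write; likewise the INVALID OPCODE (00/01) completions in the passthru test are the expected rejection of unimplemented vendor-specific commands. Before the teardown that follows, the same listener could also be exercised from the kernel initiator; a hypothetical nvme-cli session against the addresses above (not part of this test run):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme list                                        # Malloc0 should appear as a 64 MiB namespace
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1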
00:11:27.630 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1838833 ']' 00:11:27.630 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1838833 00:11:27.630 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1838833 ']' 00:11:27.630 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1838833 00:11:27.890 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:27.890 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:27.890 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1838833 00:11:27.890 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:27.890 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:27.890 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1838833' 00:11:27.890 killing process with pid 1838833 00:11:27.890 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1838833 00:11:27.890 08:10:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1838833 00:11:27.890 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:27.890 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:27.890 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:27.890 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:27.890 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:27.890 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:27.890 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:27.890 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:27.890 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:27.890 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.890 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.890 08:10:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:30.439 00:11:30.439 real 0m12.274s 00:11:30.439 user 0m14.008s 00:11:30.439 sys 0m6.201s 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:30.439 ************************************ 00:11:30.439 END TEST nvmf_bdevio 00:11:30.439 ************************************ 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:30.439 00:11:30.439 real 5m6.003s 00:11:30.439 user 11m56.321s 00:11:30.439 sys 1m53.188s 
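(Annotation) nvmftestfini above undoes everything the init built: the target process is killed, the nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded, iptr restores iptables by filtering out every rule tagged SPDK_NVMF, and remove_spdk_ns drops the test namespace before the interface address is flushed. A condensed sketch of the equivalent manual cleanup; the ip netns delete line is an assumption about what _remove_spdk_ns does, since its output is redirected away in the log:

    kill "$nvmfpid" && wait "$nvmfpid"                       # 1838833 in this run
    modprobe -v -r nvme-tcp                                  # also pulls out nvme_fabrics/nvme_keyring
    iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the tagged rule
    ip netns delete cvl_0_0_ns_spdk                          # presumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1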
00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:30.439 ************************************ 00:11:30.439 END TEST nvmf_target_core 00:11:30.439 ************************************ 00:11:30.439 08:10:27 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:30.439 08:10:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:30.439 08:10:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.439 08:10:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:30.439 ************************************ 00:11:30.439 START TEST nvmf_target_extra 00:11:30.439 ************************************ 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:30.439 * Looking for test storage... 00:11:30.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:30.439 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:30.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.440 --rc genhtml_branch_coverage=1 00:11:30.440 --rc genhtml_function_coverage=1 00:11:30.440 --rc genhtml_legend=1 00:11:30.440 --rc geninfo_all_blocks=1 00:11:30.440 --rc geninfo_unexecuted_blocks=1 00:11:30.440 00:11:30.440 ' 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:30.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.440 --rc genhtml_branch_coverage=1 00:11:30.440 --rc genhtml_function_coverage=1 00:11:30.440 --rc genhtml_legend=1 00:11:30.440 --rc geninfo_all_blocks=1 00:11:30.440 --rc geninfo_unexecuted_blocks=1 00:11:30.440 00:11:30.440 ' 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:30.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.440 --rc genhtml_branch_coverage=1 00:11:30.440 --rc genhtml_function_coverage=1 00:11:30.440 --rc genhtml_legend=1 00:11:30.440 --rc geninfo_all_blocks=1 00:11:30.440 --rc geninfo_unexecuted_blocks=1 00:11:30.440 00:11:30.440 ' 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:30.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.440 --rc genhtml_branch_coverage=1 00:11:30.440 --rc genhtml_function_coverage=1 00:11:30.440 --rc genhtml_legend=1 00:11:30.440 --rc geninfo_all_blocks=1 00:11:30.440 --rc geninfo_unexecuted_blocks=1 00:11:30.440 00:11:30.440 ' 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:30.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:30.440 ************************************ 00:11:30.440 START TEST nvmf_example 00:11:30.440 ************************************ 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:30.440 * Looking for test storage... 
00:11:30.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:30.440 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:30.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.702 --rc genhtml_branch_coverage=1 00:11:30.702 --rc genhtml_function_coverage=1 00:11:30.702 --rc genhtml_legend=1 00:11:30.702 --rc geninfo_all_blocks=1 00:11:30.702 --rc geninfo_unexecuted_blocks=1 00:11:30.702 00:11:30.702 ' 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:30.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.702 --rc genhtml_branch_coverage=1 00:11:30.702 --rc genhtml_function_coverage=1 00:11:30.702 --rc genhtml_legend=1 00:11:30.702 --rc geninfo_all_blocks=1 00:11:30.702 --rc geninfo_unexecuted_blocks=1 00:11:30.702 00:11:30.702 ' 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:30.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.702 --rc genhtml_branch_coverage=1 00:11:30.702 --rc genhtml_function_coverage=1 00:11:30.702 --rc genhtml_legend=1 00:11:30.702 --rc geninfo_all_blocks=1 00:11:30.702 --rc geninfo_unexecuted_blocks=1 00:11:30.702 00:11:30.702 ' 00:11:30.702 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:30.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.702 --rc genhtml_branch_coverage=1 00:11:30.702 --rc genhtml_function_coverage=1 00:11:30.702 --rc genhtml_legend=1 00:11:30.702 --rc geninfo_all_blocks=1 00:11:30.702 --rc geninfo_unexecuted_blocks=1 00:11:30.703 00:11:30.703 ' 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:30.703 08:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:30.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:30.703 08:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:30.703 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:38.865 08:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:38.865 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:38.865 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:38.866 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:38.866 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:38.866 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.866 08:10:35 
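Each selected PCI function is then resolved to its kernel interface by globbing sysfs, which is how the two 0x159b ports above become cvl_0_0 and cvl_0_1. A condensed sketch of that loop, same logic as the nvmf/common.sh lines traced above but with the link-state and rdma branches omitted:

  # Resolve each PCI address to its netdev name via sysfs, as traced above.
  shopt -s nullglob   # a device with no netdev then contributes nothing
  net_devs=()
  for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:4b:00.0/net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
  done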
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:38.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:38.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:11:38.866 00:11:38.866 --- 10.0.0.2 ping statistics --- 00:11:38.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.866 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:38.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:38.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:11:38.866 00:11:38.866 --- 10.0.0.1 ping statistics --- 00:11:38.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.866 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1843792 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1843792 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1843792 ']' 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.866 08:10:35 
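The environment the example app is now starting into was assembled by nvmf_tcp_init, traced above: port cvl_0_0 moved into namespace cvl_0_0_ns_spdk with 10.0.0.2/24, port cvl_0_1 left in the root namespace with 10.0.0.1/24, an SPDK_NVMF-tagged iptables rule opening TCP/4420 on the initiator side, and a one-packet ping each way as a reachability gate. A condensed, hedged replay of that sequence (not the verbatim script; error handling and the teardown path are omitted):

  # Condensed replay of the nvmf_tcp_init sequence traced above.
  TARGET_NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add "$TARGET_NS"
  ip link set cvl_0_0 netns "$TARGET_NS"                  # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
  ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
  ip netns exec "$TARGET_NS" ip link set lo up
  # The comment tag lets teardown strip the rule later (iptables-save | grep -v SPDK_NVMF):
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2 && ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1

Putting the target end of the link into its own namespace is what lets a single host exercise real NIC-to-NIC TCP traffic instead of loopback.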
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.866 08:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:39.129 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:51.368 Initializing NVMe Controllers 00:11:51.368 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:51.368 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:51.368 Initialization complete. Launching workers. 00:11:51.368 ======================================================== 00:11:51.368 Latency(us) 00:11:51.368 Device Information : IOPS MiB/s Average min max 00:11:51.368 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18896.99 73.82 3387.26 627.79 45233.95 00:11:51.368 ======================================================== 00:11:51.368 Total : 18896.99 73.82 3387.26 627.79 45233.95 00:11:51.368 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:51.368 rmmod nvme_tcp 00:11:51.368 rmmod nvme_fabrics 00:11:51.368 rmmod nvme_keyring 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1843792 ']' 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1843792 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1843792 ']' 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1843792 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1843792 00:11:51.368 08:10:46 
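Steps @45 through @61 of nvmf_example.sh, traced above, are the whole example in five RPCs plus one perf run: create the TCP transport, back a namespace with a 64 MiB, 512 B-block malloc bdev, export it as nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, then drive it from the root namespace with spdk_nvme_perf. The reported numbers are self-consistent: 18896.99 IOPS at 4096 B is about 73.8 MiB/s, and at queue depth 64 that implies 64 / 18897, roughly 3.39 ms per I/O, matching the 3387.26 us average. A hedged condensation follows, substituting scripts/rpc.py against the /var/tmp/spdk.sock socket named in the waitforlisten trace for the test suite's rpc_cmd wrapper:

  # Hedged condensation of the traced example flow; rpc.py stands in for rpc_cmd.
  RPC="ip netns exec cvl_0_0_ns_spdk scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192      # transport opts exactly as traced
  $RPC bdev_malloc_create 64 512                    # 64 MiB, 512 B blocks -> Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side (root namespace): 10 s of QD64 4 KiB randrw, 30% reads.
  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'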
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1843792' 00:11:51.368 killing process with pid 1843792 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1843792 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1843792 00:11:51.368 nvmf threads initialize successfully 00:11:51.368 bdev subsystem init successfully 00:11:51.368 created a nvmf target service 00:11:51.368 create targets's poll groups done 00:11:51.368 all subsystems of target started 00:11:51.368 nvmf target is running 00:11:51.368 all subsystems of target stopped 00:11:51.368 destroy targets's poll groups done 00:11:51.368 destroyed the nvmf target service 00:11:51.368 bdev subsystem finish successfully 00:11:51.368 nvmf threads destroy successfully 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.368 08:10:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.941 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:51.941 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:51.941 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:51.941 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:51.941 00:11:51.941 real 0m21.445s 00:11:51.941 user 0m46.633s 00:11:51.941 sys 0m6.993s 00:11:51.941 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.941 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:51.941 ************************************ 00:11:51.941 END TEST nvmf_example 00:11:51.941 ************************************ 00:11:51.941 08:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:51.941 08:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:51.941 08:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.941 08:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:51.941 ************************************ 00:11:51.941 START TEST nvmf_filesystem 00:11:51.941 ************************************ 00:11:51.941 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:51.941 * Looking for test storage... 00:11:51.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:51.941 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:51.941 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:51.941 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:52.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.207 --rc genhtml_branch_coverage=1 00:11:52.207 --rc genhtml_function_coverage=1 00:11:52.207 --rc genhtml_legend=1 00:11:52.207 --rc geninfo_all_blocks=1 00:11:52.207 --rc geninfo_unexecuted_blocks=1 00:11:52.207 00:11:52.207 ' 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:52.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.207 --rc genhtml_branch_coverage=1 00:11:52.207 --rc genhtml_function_coverage=1 00:11:52.207 --rc genhtml_legend=1 00:11:52.207 --rc geninfo_all_blocks=1 00:11:52.207 --rc geninfo_unexecuted_blocks=1 00:11:52.207 00:11:52.207 ' 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:52.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.207 --rc genhtml_branch_coverage=1 00:11:52.207 --rc genhtml_function_coverage=1 00:11:52.207 --rc genhtml_legend=1 00:11:52.207 --rc geninfo_all_blocks=1 00:11:52.207 --rc geninfo_unexecuted_blocks=1 00:11:52.207 00:11:52.207 ' 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:52.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.207 --rc genhtml_branch_coverage=1 00:11:52.207 --rc genhtml_function_coverage=1 00:11:52.207 --rc genhtml_legend=1 00:11:52.207 --rc geninfo_all_blocks=1 00:11:52.207 --rc geninfo_unexecuted_blocks=1 00:11:52.207 00:11:52.207 ' 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:52.207 08:10:49 
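The preamble above, run by filesystem.sh by way of autotest_common.sh, decides which lcov option spelling to use by comparing the installed lcov version against 2 with scripts/common.sh's cmp_versions. The trace shows the mechanics: split both versions on the characters . - :, then compare field by field. A hedged re-sketch of that idiom (simplified; the real helper also normalizes non-numeric fields via decimal()):

  # Sketch of the cmp_versions idiom traced above; simplified and assumption-labeled.
  cmp_versions() {  # usage: cmp_versions 1.15 '<' 2
    local -a ver1 ver2; local op=$2 v
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
      ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }   # missing fields count as 0
      ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *'='* ]]   # all fields equal: true only for ==, <=, >=
  }
  lt() { cmp_versions "$1" '<' "$2"; }
  lt 1.15 2 && echo "old lcov: use the --rc lcov_branch_coverage=1 option spelling"

Here lt 1.15 2 succeeds (1 < 2 in the first field), which is why the trace selects the lcov_branch_coverage/lcov_function_coverage rc options shown below.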
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:52.207 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:52.208 
08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:52.208 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:52.208 #define SPDK_CONFIG_H 00:11:52.208 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:52.208 #define SPDK_CONFIG_APPS 1 00:11:52.208 #define SPDK_CONFIG_ARCH native 00:11:52.208 #undef SPDK_CONFIG_ASAN 00:11:52.208 #undef SPDK_CONFIG_AVAHI 00:11:52.208 #undef SPDK_CONFIG_CET 00:11:52.208 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:52.208 #define SPDK_CONFIG_COVERAGE 1 00:11:52.208 #define SPDK_CONFIG_CROSS_PREFIX 00:11:52.208 #undef SPDK_CONFIG_CRYPTO 00:11:52.208 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:52.208 #undef SPDK_CONFIG_CUSTOMOCF 00:11:52.208 #undef SPDK_CONFIG_DAOS 00:11:52.208 #define SPDK_CONFIG_DAOS_DIR 00:11:52.208 #define SPDK_CONFIG_DEBUG 1 00:11:52.209 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:52.209 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:52.209 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:52.209 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:52.209 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:52.209 #undef SPDK_CONFIG_DPDK_UADK 00:11:52.209 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:52.209 #define SPDK_CONFIG_EXAMPLES 1 00:11:52.209 #undef SPDK_CONFIG_FC 00:11:52.209 #define SPDK_CONFIG_FC_PATH 00:11:52.209 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:52.209 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:52.209 #define SPDK_CONFIG_FSDEV 1 00:11:52.209 #undef SPDK_CONFIG_FUSE 00:11:52.209 #undef SPDK_CONFIG_FUZZER 00:11:52.209 #define SPDK_CONFIG_FUZZER_LIB 00:11:52.209 #undef SPDK_CONFIG_GOLANG 00:11:52.209 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:52.209 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:52.209 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:52.209 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:52.209 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:52.209 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:52.209 #undef SPDK_CONFIG_HAVE_LZ4 00:11:52.209 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:52.209 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:52.209 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:52.209 #define SPDK_CONFIG_IDXD 1 00:11:52.209 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:52.209 #undef SPDK_CONFIG_IPSEC_MB 00:11:52.209 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:52.209 #define SPDK_CONFIG_ISAL 1 00:11:52.209 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:52.209 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:52.209 #define SPDK_CONFIG_LIBDIR 00:11:52.209 #undef SPDK_CONFIG_LTO 00:11:52.209 #define SPDK_CONFIG_MAX_LCORES 128 00:11:52.209 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:52.209 #define SPDK_CONFIG_NVME_CUSE 1 00:11:52.209 #undef SPDK_CONFIG_OCF 00:11:52.209 #define SPDK_CONFIG_OCF_PATH 00:11:52.209 #define SPDK_CONFIG_OPENSSL_PATH 00:11:52.209 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:52.209 #define SPDK_CONFIG_PGO_DIR 00:11:52.209 #undef SPDK_CONFIG_PGO_USE 00:11:52.209 #define SPDK_CONFIG_PREFIX /usr/local 00:11:52.209 #undef SPDK_CONFIG_RAID5F 00:11:52.209 #undef SPDK_CONFIG_RBD 00:11:52.209 #define SPDK_CONFIG_RDMA 1 00:11:52.209 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:52.209 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:52.209 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:52.209 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:52.209 #define SPDK_CONFIG_SHARED 1 00:11:52.209 #undef SPDK_CONFIG_SMA 00:11:52.209 #define SPDK_CONFIG_TESTS 1 00:11:52.209 #undef SPDK_CONFIG_TSAN 
00:11:52.209 #define SPDK_CONFIG_UBLK 1 00:11:52.209 #define SPDK_CONFIG_UBSAN 1 00:11:52.209 #undef SPDK_CONFIG_UNIT_TESTS 00:11:52.209 #undef SPDK_CONFIG_URING 00:11:52.209 #define SPDK_CONFIG_URING_PATH 00:11:52.209 #undef SPDK_CONFIG_URING_ZNS 00:11:52.209 #undef SPDK_CONFIG_USDT 00:11:52.209 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:52.209 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:52.209 #define SPDK_CONFIG_VFIO_USER 1 00:11:52.209 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:52.209 #define SPDK_CONFIG_VHOST 1 00:11:52.209 #define SPDK_CONFIG_VIRTIO 1 00:11:52.209 #undef SPDK_CONFIG_VTUNE 00:11:52.209 #define SPDK_CONFIG_VTUNE_DIR 00:11:52.209 #define SPDK_CONFIG_WERROR 1 00:11:52.209 #define SPDK_CONFIG_WPDK_DIR 00:11:52.209 #undef SPDK_CONFIG_XNVME 00:11:52.209 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:52.209 08:10:49 
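The pm/common block above selects the power and resource monitors for this run: CPU load and vmstat always, CPU temperature and BMC power only when the box is bare-metal Linux (the != QEMU DMI check and the ! -e /.dockerenv guard both pass here), with an associative array recording which collector must run under sudo. A hedged sketch of that selection; the exact sysfs path read for the QEMU check is an assumption:

  # Sketch of the monitor selection traced above (pm/common); DMI path hedged.
  declare -A MONITOR_RESOURCES_SUDO=(
    [collect-bmc-pm]=1     # BMC power readings need sudo
    [collect-cpu-load]=0
    [collect-cpu-temp]=0
    [collect-vmstat]=0
  )
  SUDO=("" "sudo -E")      # indexed by MONITOR_RESOURCES_SUDO[name]
  MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
  if [[ $(uname -s) == Linux ]] &&
     [[ $(< /sys/class/dmi/id/board_vendor) != QEMU ]] &&   # assumed DMI source
     [[ ! -e /.dockerenv ]]; then
    MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)    # bare metal only
  fi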
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:52.209 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:52.210 08:10:49 
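The long run of ': 0' / 'export SPDK_TEST_*' pairs above, continuing below, is autotest_common.sh materializing every test flag: give the variable a default if the job's autorun-spdk.conf did not set it, then export it so every child script sees the same value. In this job the trace shows SPDK_RUN_FUNCTIONAL_TEST, SPDK_TEST_NVMF, SPDK_TEST_NVME_CLI and SPDK_TEST_VFIOUSER resolving to 1 and SPDK_TEST_NVMF_TRANSPORT to tcp, matching the config written in the Prepare stage. A sketch of the idiom; note that xtrace prints the post-expansion value, so the defaults written below are assumptions, not something the trace can confirm:

  # The default-then-export idiom behind each ': <value>' / 'export NAME' pair above.
  # xtrace shows the expanded value, so flags this job set appear as ': 1' or ': tcp'.
  : "${RUN_NIGHTLY:=0}";                export RUN_NIGHTLY                # traced as ': 0'
  : "${SPDK_RUN_FUNCTIONAL_TEST:=0}";   export SPDK_RUN_FUNCTIONAL_TEST   # traced as ': 1'
  : "${SPDK_TEST_NVMF:=0}";             export SPDK_TEST_NVMF             # traced as ': 1'
  : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT   # traced as ': tcp'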
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:52.210 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
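The LD_LIBRARY_PATH exported above repeats the same three directories (spdk/build/lib, dpdk/build/lib, libvfio-user/.../lib) five times over, which is what an unconditional prepend produces when the environment script is sourced once per nested test scope; duplicates are harmless to the loader but make the trace hard to read. A sketch of an idempotent prepend that would avoid the repetition (the helper name is invented, not part of the SPDK scripts):

prepend_once() {                # hypothetical helper, for illustration only
  case ":$LD_LIBRARY_PATH:" in
    *":$1:"*) ;;                # already on the path: do nothing
    *) LD_LIBRARY_PATH="$1${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" ;;
  esac
}
prepend_once /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
export LD_LIBRARY_PATH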
00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
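The suppression-file steps traced just above (`rm -rf`, `cat`, `echo leak:libfuse3.so`, then the LSAN_OPTIONS export) implement LeakSanitizer's standard suppression mechanism: any leak whose stack resolves into a frame matching `libfuse3.so` is dropped from the report. The same flow in isolation, assuming the echo in the trace is redirected into the file as its position suggests:

echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file          # one 'leak:<pattern>' rule per line
export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file   # LSan reads the rules at process exit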
00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:52.211 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1846568 ]] 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1846568 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
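`kill -0` above sends no signal at all; it only asks the kernel whether PID 1846568 exists and is signalable, i.e. whether the test runner is still alive before `set_test_storage` goes looking for roughly 2 GiB of scratch space. A condensed restatement of those two steps (the PID is the one from the trace; the arithmetic just shows where 2147483648 comes from):

pid=1846568                           # runner PID recorded in the trace
kill -0 "$pid" 2>/dev/null || exit 1  # liveness probe only, no signal delivered
requested=$((2 * 1024 * 1024 * 1024)) # = 2147483648 bytes, the value passed to set_test_storage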
00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.0GoHXD 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.0GoHXD/tests/target /tmp/spdk.0GoHXD 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:52.212 08:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=118239612928 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356509184 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11116896256 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666886144 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678252544 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847934976 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871302656 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23367680 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:52.212 08:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677060608 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678256640 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1196032 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935634944 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935647232 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:52.212 * Looking for test storage... 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=118239612928 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13331488768 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:52.212 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:52.213 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:52.213 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:52.213 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:52.213 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:52.213 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:52.213 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:11:52.213 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:52.213 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:52.213 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:52.213 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:52.213 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:52.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.475 --rc genhtml_branch_coverage=1 00:11:52.475 --rc genhtml_function_coverage=1 00:11:52.475 --rc genhtml_legend=1 00:11:52.475 --rc geninfo_all_blocks=1 00:11:52.475 --rc geninfo_unexecuted_blocks=1 00:11:52.475 00:11:52.475 ' 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:52.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.475 --rc genhtml_branch_coverage=1 00:11:52.475 --rc genhtml_function_coverage=1 00:11:52.475 --rc genhtml_legend=1 00:11:52.475 --rc geninfo_all_blocks=1 00:11:52.475 --rc geninfo_unexecuted_blocks=1 00:11:52.475 00:11:52.475 ' 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:52.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.475 --rc genhtml_branch_coverage=1 00:11:52.475 --rc genhtml_function_coverage=1 00:11:52.475 --rc genhtml_legend=1 00:11:52.475 --rc geninfo_all_blocks=1 00:11:52.475 --rc geninfo_unexecuted_blocks=1 00:11:52.475 00:11:52.475 ' 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:52.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:52.475 --rc genhtml_branch_coverage=1 00:11:52.475 --rc genhtml_function_coverage=1 00:11:52.475 --rc genhtml_legend=1 00:11:52.475 --rc geninfo_all_blocks=1 00:11:52.475 --rc geninfo_unexecuted_blocks=1 00:11:52.475 00:11:52.475 ' 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:52.475 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:52.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:52.476 08:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:52.476 08:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:00.626 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.626 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:00.627 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:00.627 08:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:00.627 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:00.627 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:00.627 08:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:00.627 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:00.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:00.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:12:00.627 00:12:00.627 --- 10.0.0.2 ping statistics --- 00:12:00.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.627 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:00.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:00.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:12:00.627 00:12:00.627 --- 10.0.0.1 ping statistics --- 00:12:00.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.627 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:00.627 ************************************ 00:12:00.627 START TEST nvmf_filesystem_no_in_capsule 00:12:00.627 ************************************ 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:00.627 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:00.628 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:00.628 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.628 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1850338 00:12:00.628 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1850338 00:12:00.628 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:00.628 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1850338 ']' 00:12:00.628 
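Note: nvmf_tcp_init, traced above, isolates the target-side port in its own network namespace so that initiator (cvl_0_1, 10.0.0.1) and target (cvl_0_0, 10.0.0.2) exchange real packets over the physical link even though both sit on one host; the two pings prove the path in each direction before any NVMe traffic flows. The essential sequence, condensed from the trace:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator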
08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.628 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.628 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.628 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.628 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.628 [2024-11-28 08:10:57.337560] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:12:00.628 [2024-11-28 08:10:57.337620] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.628 [2024-11-28 08:10:57.438622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.628 [2024-11-28 08:10:57.491697] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:00.628 [2024-11-28 08:10:57.491754] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.628 [2024-11-28 08:10:57.491763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.628 [2024-11-28 08:10:57.491771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.628 [2024-11-28 08:10:57.491777] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
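Note: nvmfappstart launches the target binary inside the namespace created above and blocks until its RPC socket answers. Roughly, as a sketch of what the trace shows rather than the verbatim helper:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # -e: tracepoint group mask, -m: core mask
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # polls until /var/tmp/spdk.sock accepts RPCs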
00:12:00.628 [2024-11-28 08:10:57.493829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.628 [2024-11-28 08:10:57.493994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.628 [2024-11-28 08:10:57.494156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.628 [2024-11-28 08:10:57.494156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.889 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.889 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:00.889 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:00.890 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:00.890 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.152 [2024-11-28 08:10:58.215752] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.152 Malloc1 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.152 08:10:58 
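Note: -m takes a hexadecimal core mask, so 0xF (binary 1111) selects cores 0 through 3 and exactly four reactor threads come up; the start notices may appear in any order. A quick way to read such a mask:

    # list the cores selected by a mask like 0xF
    for c in {0..7}; do (( (0xF >> c) & 1 )) && echo "core $c"; done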
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.152 [2024-11-28 08:10:58.384505] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.152 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:01.152 { 00:12:01.152 "name": "Malloc1", 00:12:01.152 "aliases": [ 00:12:01.152 "d4c515ff-4efa-4637-a3d8-2888dee57197" 00:12:01.152 ], 00:12:01.152 "product_name": "Malloc disk", 00:12:01.152 "block_size": 512, 00:12:01.152 "num_blocks": 1048576, 00:12:01.152 "uuid": "d4c515ff-4efa-4637-a3d8-2888dee57197", 00:12:01.152 "assigned_rate_limits": { 00:12:01.152 "rw_ios_per_sec": 0, 00:12:01.152 "rw_mbytes_per_sec": 0, 00:12:01.152 "r_mbytes_per_sec": 0, 00:12:01.152 "w_mbytes_per_sec": 0 00:12:01.152 }, 00:12:01.152 "claimed": true, 00:12:01.152 "claim_type": "exclusive_write", 00:12:01.152 "zoned": false, 00:12:01.152 "supported_io_types": { 00:12:01.152 "read": 
true, 00:12:01.152 "write": true, 00:12:01.152 "unmap": true, 00:12:01.152 "flush": true, 00:12:01.152 "reset": true, 00:12:01.152 "nvme_admin": false, 00:12:01.152 "nvme_io": false, 00:12:01.152 "nvme_io_md": false, 00:12:01.152 "write_zeroes": true, 00:12:01.152 "zcopy": true, 00:12:01.152 "get_zone_info": false, 00:12:01.152 "zone_management": false, 00:12:01.152 "zone_append": false, 00:12:01.152 "compare": false, 00:12:01.152 "compare_and_write": false, 00:12:01.152 "abort": true, 00:12:01.152 "seek_hole": false, 00:12:01.152 "seek_data": false, 00:12:01.152 "copy": true, 00:12:01.152 "nvme_iov_md": false 00:12:01.152 }, 00:12:01.152 "memory_domains": [ 00:12:01.152 { 00:12:01.152 "dma_device_id": "system", 00:12:01.152 "dma_device_type": 1 00:12:01.152 }, 00:12:01.152 { 00:12:01.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.152 "dma_device_type": 2 00:12:01.152 } 00:12:01.152 ], 00:12:01.152 "driver_specific": {} 00:12:01.152 } 00:12:01.152 ]' 00:12:01.153 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:01.415 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:01.415 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:01.415 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:01.415 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:01.415 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:01.415 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:01.415 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:02.802 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:02.802 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:02.802 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:02.802 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:02.802 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:05.351 08:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:05.351 08:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:05.351 08:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:05.351 08:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:05.351 08:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:05.351 08:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:05.351 08:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:05.351 08:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:05.351 08:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:05.351 08:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:05.351 08:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:05.351 08:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:05.351 08:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:05.351 08:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:05.351 08:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:05.351 08:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:05.351 08:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:05.351 08:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:05.614 08:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:07.029 08:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:07.029 08:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:07.029 08:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:07.029 08:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.029 08:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.029 ************************************ 00:12:07.029 START TEST filesystem_ext4 00:12:07.029 ************************************ 00:12:07.029 08:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
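Note: the provisioning just traced is the standard NVMe-oF bring-up followed by the host-side connect and partitioning; every command below is lifted from the trace (rpc_cmd is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0      # -c 0: in-capsule data disabled
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1             # 512 MiB ram disk, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # host side, default namespace:
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%   # one partition, whole disk
    partprobe

The size is cross-checked first: bdev_get_bdevs piped through jq yields block_size 512 and num_blocks 1048576, and /sys/block/nvme0n1 must report the same 536870912 bytes before partitioning proceeds.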
00:12:07.029 08:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:07.029 08:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:07.029 08:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:07.029 08:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:07.029 08:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:07.029 08:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:07.029 08:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:07.029 08:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:07.029 08:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:07.029 08:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:07.029 mke2fs 1.47.0 (5-Feb-2023) 00:12:07.029 Discarding device blocks: 0/522240 done 00:12:07.029 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:07.029 Filesystem UUID: ea285b27-d556-4703-9841-9552673bceba 00:12:07.029 Superblock backups stored on blocks: 00:12:07.029 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:07.029 00:12:07.029 Allocating group tables: 0/64 done 00:12:07.029 Writing inode tables: 0/64 done 00:12:07.029 Creating journal (8192 blocks): done 00:12:07.029 Writing superblocks and filesystem accounting information: 0/64 done 00:12:07.029 00:12:07.029 08:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:07.029 08:11:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:12.334 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:12.334 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:12.334 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:12.334 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:12.334 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:12.334 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:12.334 
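Note: nvmf_filesystem_create, whose ext4 run is traced above, is a create/sync/delete round trip proving the exported namespace behaves like a real disk; condensed from the trace (the real helper also re-checks lsblk for the device and partition afterwards):

    mkfs.ext4 -F /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"   # the target process must still be alive afterwards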
08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1850338 00:12:12.334 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:12.334 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:12.334 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:12.334 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:12.334 00:12:12.334 real 0m5.679s 00:12:12.334 user 0m0.020s 00:12:12.334 sys 0m0.087s 00:12:12.334 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.334 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:12.334 ************************************ 00:12:12.334 END TEST filesystem_ext4 00:12:12.334 ************************************ 00:12:12.596 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:12.596 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:12.596 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.596 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.596 ************************************ 00:12:12.596 START TEST filesystem_btrfs 00:12:12.596 ************************************ 00:12:12.596 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:12.596 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:12.596 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:12.596 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:12.596 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:12.596 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:12.596 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:12.596 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:12.596 08:11:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:12.596 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:12.596 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:13.169 btrfs-progs v6.8.1 00:12:13.169 See https://btrfs.readthedocs.io for more information. 00:12:13.169 00:12:13.169 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:13.169 NOTE: several default settings have changed in version 5.15, please make sure 00:12:13.169 this does not affect your deployments: 00:12:13.170 - DUP for metadata (-m dup) 00:12:13.170 - enabled no-holes (-O no-holes) 00:12:13.170 - enabled free-space-tree (-R free-space-tree) 00:12:13.170 00:12:13.170 Label: (null) 00:12:13.170 UUID: 802d6c3d-32a5-4596-97b4-19698694abb9 00:12:13.170 Node size: 16384 00:12:13.170 Sector size: 4096 (CPU page size: 4096) 00:12:13.170 Filesystem size: 510.00MiB 00:12:13.170 Block group profiles: 00:12:13.170 Data: single 8.00MiB 00:12:13.170 Metadata: DUP 32.00MiB 00:12:13.170 System: DUP 8.00MiB 00:12:13.170 SSD detected: yes 00:12:13.170 Zoned device: no 00:12:13.170 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:13.170 Checksum: crc32c 00:12:13.170 Number of devices: 1 00:12:13.170 Devices: 00:12:13.170 ID SIZE PATH 00:12:13.170 1 510.00MiB /dev/nvme0n1p1 00:12:13.170 00:12:13.170 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:13.170 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:13.170 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:13.170 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:13.170 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:13.170 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:13.170 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:13.170 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:13.170 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1850338 00:12:13.170 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:13.170 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:13.170 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:13.170 
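Note: make_filesystem only has to vary the force flag per mkfs flavor, visible in the trace as the '[ btrfs = ext4 ]' test; as a sketch:

    # mkfs.ext4 spells force -F; mkfs.btrfs and mkfs.xfs spell it -f
    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
    mkfs.$fstype $force "$dev_name"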
08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:13.170 00:12:13.170 real 0m0.721s 00:12:13.170 user 0m0.026s 00:12:13.170 sys 0m0.123s 00:12:13.170 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:13.170 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:13.170 ************************************ 00:12:13.170 END TEST filesystem_btrfs 00:12:13.170 ************************************ 00:12:13.170 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:13.170 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:13.170 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:13.170 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.431 ************************************ 00:12:13.431 START TEST filesystem_xfs 00:12:13.431 ************************************ 00:12:13.431 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:13.431 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:13.431 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:13.431 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:13.431 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:13.431 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:13.431 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:13.431 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:13.431 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:13.431 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:13.431 08:11:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:13.431 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:13.431 = sectsz=512 attr=2, projid32bit=1 00:12:13.431 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:13.431 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:13.431 data 
= bsize=4096 blocks=130560, imaxpct=25 00:12:13.431 = sunit=0 swidth=0 blks 00:12:13.431 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:13.431 log =internal log bsize=4096 blocks=16384, version=2 00:12:13.431 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:13.431 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:14.376 Discarding blocks...Done. 00:12:14.376 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:14.376 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:16.293 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:16.293 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:16.293 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:16.293 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:16.293 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:16.293 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:16.293 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1850338 00:12:16.293 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:16.293 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:16.293 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:16.293 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:16.293 00:12:16.293 real 0m2.792s 00:12:16.293 user 0m0.030s 00:12:16.293 sys 0m0.076s 00:12:16.293 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.293 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:16.293 ************************************ 00:12:16.293 END TEST filesystem_xfs 00:12:16.293 ************************************ 00:12:16.293 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:16.293 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:16.554 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:16.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.554 08:11:13 
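Note: after the xfs run the test tears everything down in the reverse order of setup; the sequence, taken from the trace here and just below:

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1    # drop the test partition
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME     # poll lsblk until the serial is gone
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    killprocess "$nvmfpid"                            # pid 1850338, the nvmf_tgt instance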
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:16.554 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:16.554 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:16.554 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.554 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:16.554 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.815 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:16.815 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.815 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.815 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.815 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.815 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:16.815 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1850338 00:12:16.815 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1850338 ']' 00:12:16.815 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1850338 00:12:16.815 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:16.815 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.815 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1850338 00:12:16.815 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:16.815 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:16.815 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1850338' 00:12:16.815 killing process with pid 1850338 00:12:16.815 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1850338 00:12:16.815 08:11:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 1850338 00:12:17.077 08:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:17.077 00:12:17.077 real 0m16.858s 00:12:17.077 user 1m6.465s 00:12:17.077 sys 0m1.478s 00:12:17.077 08:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.077 08:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.077 ************************************ 00:12:17.077 END TEST nvmf_filesystem_no_in_capsule 00:12:17.077 ************************************ 00:12:17.077 08:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:17.077 08:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:17.077 08:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.077 08:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:17.077 ************************************ 00:12:17.077 START TEST nvmf_filesystem_in_capsule 00:12:17.077 ************************************ 00:12:17.077 08:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:17.077 08:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:17.077 08:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:17.077 08:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:17.077 08:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:17.077 08:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.077 08:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1853921 00:12:17.077 08:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1853921 00:12:17.077 08:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:17.077 08:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1853921 ']' 00:12:17.077 08:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.077 08:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:17.077 08:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
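Note: the second half repeats the whole flow with in_capsule=4096. The only material difference is the transport setting: -c gives the in-capsule data size, so with 4096 a write of up to 4 KiB travels inside the NVMe/TCP command capsule itself instead of being transferred separately after the command, which is exactly the code path the first half (-c 0) never exercises:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0      # nvmf_filesystem_no_in_capsule
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # nvmf_filesystem_in_capsule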
00:12:17.077 08:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:17.077 08:11:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.077 [2024-11-28 08:11:14.283069] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:12:17.077 [2024-11-28 08:11:14.283130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.337 [2024-11-28 08:11:14.373709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:17.337 [2024-11-28 08:11:14.404137] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.337 [2024-11-28 08:11:14.404169] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.337 [2024-11-28 08:11:14.404175] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.337 [2024-11-28 08:11:14.404180] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.337 [2024-11-28 08:11:14.404184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:17.337 [2024-11-28 08:11:14.405653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.337 [2024-11-28 08:11:14.405803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.337 [2024-11-28 08:11:14.405953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.337 [2024-11-28 08:11:14.405956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.908 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:17.908 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:17.908 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:17.908 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:17.908 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.908 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.908 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:17.908 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:17.908 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.908 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.908 [2024-11-28 08:11:15.119763] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.908 08:11:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.908 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:17.908 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.908 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.169 Malloc1 00:12:18.169 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.169 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:18.169 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.169 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.169 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.169 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:18.169 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.169 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.169 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.169 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.169 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.169 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.169 [2024-11-28 08:11:15.264366] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.169 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.169 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:18.169 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:18.169 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:18.169 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:18.169 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:18.169 08:11:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:18.169 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.169 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:18.169 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.169 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:18.169 { 00:12:18.169 "name": "Malloc1", 00:12:18.169 "aliases": [ 00:12:18.169 "4df5f308-6059-42bf-a98f-fd0a0570beba" 00:12:18.169 ], 00:12:18.169 "product_name": "Malloc disk", 00:12:18.169 "block_size": 512, 00:12:18.169 "num_blocks": 1048576, 00:12:18.169 "uuid": "4df5f308-6059-42bf-a98f-fd0a0570beba", 00:12:18.169 "assigned_rate_limits": { 00:12:18.169 "rw_ios_per_sec": 0, 00:12:18.169 "rw_mbytes_per_sec": 0, 00:12:18.169 "r_mbytes_per_sec": 0, 00:12:18.169 "w_mbytes_per_sec": 0 00:12:18.169 }, 00:12:18.169 "claimed": true, 00:12:18.169 "claim_type": "exclusive_write", 00:12:18.169 "zoned": false, 00:12:18.169 "supported_io_types": { 00:12:18.169 "read": true, 00:12:18.169 "write": true, 00:12:18.169 "unmap": true, 00:12:18.169 "flush": true, 00:12:18.169 "reset": true, 00:12:18.169 "nvme_admin": false, 00:12:18.169 "nvme_io": false, 00:12:18.169 "nvme_io_md": false, 00:12:18.169 "write_zeroes": true, 00:12:18.169 "zcopy": true, 00:12:18.169 "get_zone_info": false, 00:12:18.169 "zone_management": false, 00:12:18.169 "zone_append": false, 00:12:18.169 "compare": false, 00:12:18.169 "compare_and_write": false, 00:12:18.169 "abort": true, 00:12:18.169 "seek_hole": false, 00:12:18.169 "seek_data": false, 00:12:18.169 "copy": true, 00:12:18.169 "nvme_iov_md": false 00:12:18.169 }, 00:12:18.169 "memory_domains": [ 00:12:18.169 { 00:12:18.169 "dma_device_id": "system", 00:12:18.170 "dma_device_type": 1 00:12:18.170 }, 00:12:18.170 { 00:12:18.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.170 "dma_device_type": 2 00:12:18.170 } 00:12:18.170 ], 00:12:18.170 "driver_specific": {} 00:12:18.170 } 00:12:18.170 ]' 00:12:18.170 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:18.170 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:18.170 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:18.170 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:18.170 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:18.170 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:18.170 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:18.170 08:11:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.079 08:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:20.079 08:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:20.079 08:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:20.079 08:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:20.079 08:11:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:21.991 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:21.991 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:21.991 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:21.991 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:21.991 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:21.991 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:21.991 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:21.991 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:21.991 08:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:21.991 08:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:21.991 08:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:21.991 08:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:21.991 08:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:21.991 08:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:21.991 08:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:21.991 08:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:21.991 08:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:22.252 08:11:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:22.252 08:11:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:23.214 08:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:23.214 08:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:23.214 08:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:23.214 08:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.214 08:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.496 ************************************ 00:12:23.496 START TEST filesystem_in_capsule_ext4 00:12:23.496 ************************************ 00:12:23.496 08:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:23.496 08:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:23.497 08:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:23.497 08:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:23.497 08:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:23.497 08:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:23.497 08:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:23.497 08:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:23.497 08:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:23.497 08:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:23.497 08:11:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:23.497 mke2fs 1.47.0 (5-Feb-2023) 00:12:23.497 Discarding device blocks: 0/522240 done 00:12:23.497 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:23.497 Filesystem UUID: 81d94012-69f7-49ce-96bf-d320650e96d5 00:12:23.497 Superblock backups stored on blocks: 00:12:23.497 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:23.497 00:12:23.497 Allocating group tables: 0/64 done 00:12:23.497 Writing inode tables: 
0/64 done 00:12:23.776 Creating journal (8192 blocks): done 00:12:26.110 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:12:26.110 00:12:26.110 08:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:26.110 08:11:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1853921 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:32.696 00:12:32.696 real 0m8.797s 00:12:32.696 user 0m0.044s 00:12:32.696 sys 0m0.069s 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:32.696 ************************************ 00:12:32.696 END TEST filesystem_in_capsule_ext4 00:12:32.696 ************************************ 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:32.696 
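The ext4 pass above exercises the full mount / write / unmount cycle. A minimal bash sketch of that cycle, reconstructed from the target/filesystem.sh xtrace entries (steps 23 through 43); $nvmfpid stands in for the target PID 1853921 seen in the log, and error handling is omitted:

    mount /dev/nvme0n1p1 /mnt/device          # step 23: mount the new filesystem
    touch /mnt/device/aaa                     # step 24: write a file over NVMe/TCP
    sync                                      # step 25: flush it to the target
    rm /mnt/device/aaa                        # step 26: delete it again
    sync                                      # step 27
    umount /mnt/device                        # step 30
    kill -0 "$nvmfpid"                        # step 37: target process still alive?
    lsblk -l -o NAME | grep -q -w nvme0n1     # step 40: namespace still present
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # step 43: partition still present

The same cycle repeats below for btrfs and xfs; only the mkfs invocation differs.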
************************************ 00:12:32.696 START TEST filesystem_in_capsule_btrfs 00:12:32.696 ************************************ 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:32.696 btrfs-progs v6.8.1 00:12:32.696 See https://btrfs.readthedocs.io for more information. 00:12:32.696 00:12:32.696 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:32.696 NOTE: several default settings have changed in version 5.15, please make sure 00:12:32.696 this does not affect your deployments: 00:12:32.696 - DUP for metadata (-m dup) 00:12:32.696 - enabled no-holes (-O no-holes) 00:12:32.696 - enabled free-space-tree (-R free-space-tree) 00:12:32.696 00:12:32.696 Label: (null) 00:12:32.696 UUID: 871e5f7e-a04e-4677-8962-abfdf4e2938f 00:12:32.696 Node size: 16384 00:12:32.696 Sector size: 4096 (CPU page size: 4096) 00:12:32.696 Filesystem size: 510.00MiB 00:12:32.696 Block group profiles: 00:12:32.696 Data: single 8.00MiB 00:12:32.696 Metadata: DUP 32.00MiB 00:12:32.696 System: DUP 8.00MiB 00:12:32.696 SSD detected: yes 00:12:32.696 Zoned device: no 00:12:32.696 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:32.696 Checksum: crc32c 00:12:32.696 Number of devices: 1 00:12:32.696 Devices: 00:12:32.696 ID SIZE PATH 00:12:32.696 1 510.00MiB /dev/nvme0n1p1 00:12:32.696 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1853921 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:32.696 00:12:32.696 real 0m0.596s 00:12:32.696 user 0m0.034s 00:12:32.696 sys 0m0.114s 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.696 08:11:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:32.696 ************************************ 00:12:32.696 END TEST filesystem_in_capsule_btrfs 00:12:32.696 ************************************ 00:12:32.958 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:32.958 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:32.958 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.958 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:32.958 ************************************ 00:12:32.958 START TEST filesystem_in_capsule_xfs 00:12:32.958 ************************************ 00:12:32.958 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:32.958 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:32.958 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:32.958 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:32.958 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:32.958 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:32.958 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:32.958 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:32.958 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:32.958 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:32.958 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:32.958 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:32.958 = sectsz=512 attr=2, projid32bit=1 00:12:32.958 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:32.958 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:32.958 data = bsize=4096 blocks=130560, imaxpct=25 00:12:32.958 = sunit=0 swidth=0 blks 00:12:32.958 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:32.958 log =internal log bsize=4096 blocks=16384, version=2 00:12:32.958 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:32.958 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:33.902 Discarding blocks...Done. 
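The make_filesystem helper traced at common/autotest_common.sh lines 930 through 949 drives all three mkfs runs. A sketch of what the trace shows; the only filesystem-specific branch is the force flag, and the retry logic implied by the 'local i=0' counter (the trace jumps from line 941 to 949) is elided as an assumption:

    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F                 # mkfs.ext4 forces with -F
        else
            force=-f                 # mkfs.btrfs and mkfs.xfs force with -f
        fi
        mkfs.$fstype $force "$dev_name" && return 0
    }

Called here as make_filesystem xfs /dev/nvme0n1p1, which produces the mkfs.xfs geometry report above.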
00:12:33.902 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:33.902 08:11:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:35.815 08:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:35.815 08:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:35.815 08:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:35.815 08:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:35.815 08:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:35.815 08:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:35.815 08:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1853921 00:12:35.815 08:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:35.815 08:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:35.815 08:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:35.815 08:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:35.815 00:12:35.815 real 0m2.825s 00:12:35.815 user 0m0.023s 00:12:35.815 sys 0m0.083s 00:12:35.815 08:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:35.815 08:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:35.815 ************************************ 00:12:35.815 END TEST filesystem_in_capsule_xfs 00:12:35.815 ************************************ 00:12:35.815 08:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:35.815 08:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:35.815 08:11:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.815 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.815 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:35.815 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:35.815 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.815 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:35.815 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.815 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:35.815 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.815 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.815 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:35.815 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.815 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:35.815 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1853921 00:12:35.815 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1853921 ']' 00:12:35.815 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1853921 00:12:35.815 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:35.815 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:35.815 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1853921 00:12:36.076 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:36.076 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:36.076 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1853921' 00:12:36.076 killing process with pid 1853921 00:12:36.076 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1853921 00:12:36.076 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1853921 00:12:36.076 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:36.076 00:12:36.076 real 0m19.144s 00:12:36.076 user 1m15.715s 00:12:36.076 sys 0m1.428s 00:12:36.076 08:11:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.076 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:36.076 ************************************ 00:12:36.076 END TEST nvmf_filesystem_in_capsule 00:12:36.076 ************************************ 00:12:36.339 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:36.339 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:36.339 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:36.339 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:36.339 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:36.339 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:36.339 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:36.339 rmmod nvme_tcp 00:12:36.339 rmmod nvme_fabrics 00:12:36.339 rmmod nvme_keyring 00:12:36.339 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:36.339 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:36.339 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:36.339 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:36.339 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:36.339 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:36.339 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:36.339 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:36.339 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:36.339 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:36.339 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:36.339 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:36.339 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:36.339 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.339 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.339 08:11:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:38.884 00:12:38.884 real 0m46.449s 00:12:38.884 user 2m24.516s 00:12:38.884 sys 0m8.981s 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:38.884 
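The nvmftestfini teardown traced here reduces to a short command sequence. A condensed sketch from the nvmf/common.sh xtrace; the 1..20 retry loop around the modprobe calls is dropped, and the netns deletion is an assumption about what _remove_spdk_ns does (the trace only shows it invoked with xtrace disabled):

    sync
    modprobe -v -r nvme-tcp             # unload; prints the rmmod lines above
    modprobe -v -r nvme-fabrics
    # restore iptables minus the rules tagged SPDK_NVMF at setup time
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk     # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1            # drop the initiator-side test address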
************************************ 00:12:38.884 END TEST nvmf_filesystem 00:12:38.884 ************************************ 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:38.884 ************************************ 00:12:38.884 START TEST nvmf_target_discovery 00:12:38.884 ************************************ 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:38.884 * Looking for test storage... 00:12:38.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:38.884 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:38.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.885 --rc genhtml_branch_coverage=1 00:12:38.885 --rc genhtml_function_coverage=1 00:12:38.885 --rc genhtml_legend=1 00:12:38.885 --rc geninfo_all_blocks=1 00:12:38.885 --rc geninfo_unexecuted_blocks=1 00:12:38.885 00:12:38.885 ' 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:38.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.885 --rc genhtml_branch_coverage=1 00:12:38.885 --rc genhtml_function_coverage=1 00:12:38.885 --rc genhtml_legend=1 00:12:38.885 --rc geninfo_all_blocks=1 00:12:38.885 --rc geninfo_unexecuted_blocks=1 00:12:38.885 00:12:38.885 ' 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:38.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.885 --rc genhtml_branch_coverage=1 00:12:38.885 --rc genhtml_function_coverage=1 00:12:38.885 --rc genhtml_legend=1 00:12:38.885 --rc geninfo_all_blocks=1 00:12:38.885 --rc geninfo_unexecuted_blocks=1 00:12:38.885 00:12:38.885 ' 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:38.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.885 --rc genhtml_branch_coverage=1 00:12:38.885 --rc genhtml_function_coverage=1 00:12:38.885 --rc genhtml_legend=1 00:12:38.885 --rc geninfo_all_blocks=1 00:12:38.885 --rc geninfo_unexecuted_blocks=1 00:12:38.885 00:12:38.885 ' 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:38.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:38.885 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:47.029 08:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:47.029 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:47.029 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:47.030 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:47.030 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
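The discovery loop above maps supported NICs to kernel net devices through sysfs. A sketch of the pattern, assuming pci_bus_cache is an associative array keyed as vendor:device the way the $intel:0x1592 expansion suggests, and substituting an operstate read for the [[ up == up ]] test whose left-hand side the trace does not show:

    e810=(${pci_bus_cache[0x8086:0x1592]} ${pci_bus_cache[0x8086:0x159b]})
    pci_devs=("${e810[@]}")
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # each PCI function exposes its netdev(s) under .../net/
        for net_dev in /sys/bus/pci/devices/$pci/net/*; do
            dev=${net_dev##*/}
            [[ $(cat "$net_dev/operstate") == up ]] || continue
            echo "Found net devices under $pci: $dev"
            net_devs+=("$dev")
        done
    done

The driver checks (ice vs unknown/unbound) and the RDMA-only branches visible in the trace are omitted.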
00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:47.030 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:47.030 08:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:47.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:47.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:12:47.030 00:12:47.030 --- 10.0.0.2 ping statistics --- 00:12:47.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.030 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:47.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:47.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:12:47.030 00:12:47.030 --- 10.0.0.1 ping statistics --- 00:12:47.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.030 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:47.030 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:47.031 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:47.031 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:47.031 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:47.031 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.031 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1862066 00:12:47.031 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1862066 00:12:47.031 08:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:47.031 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1862066 ']' 00:12:47.031 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.031 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:47.031 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.031 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:47.031 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.031 [2024-11-28 08:11:43.480347] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:12:47.031 [2024-11-28 08:11:43.480415] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.031 [2024-11-28 08:11:43.582401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:47.031 [2024-11-28 08:11:43.636092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.031 [2024-11-28 08:11:43.636149] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.031 [2024-11-28 08:11:43.636171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.031 [2024-11-28 08:11:43.636178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.031 [2024-11-28 08:11:43.636185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
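Collapsed to the underlying commands, the namespace plumbing and target launch logged above look like this (same addresses, interface names, and flags as recorded; run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into its own ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the host ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # host ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> host ns
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Putting the target-side port in a separate namespace is what lets one machine exercise a real NIC-to-NIC TCP path between initiator and target instead of looping through lo.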
00:12:47.031 [2024-11-28 08:11:43.638252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.031 [2024-11-28 08:11:43.638451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.031 [2024-11-28 08:11:43.638452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:47.031 [2024-11-28 08:11:43.638305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.031 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:47.031 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:47.031 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:47.031 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:47.031 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.293 [2024-11-28 08:11:44.356155] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.293 Null1 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.293 08:11:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.293 [2024-11-28 08:11:44.424448] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.293 Null2 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:47.293 Null3 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.293 Null4 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.293 08:11:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.293 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.294 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.294 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:47.294 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.294 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.556 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.556 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:47.556 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.556 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.556 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.556 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:47.556 00:12:47.556 Discovery Log Number of Records 6, Generation counter 6 00:12:47.556 =====Discovery Log Entry 0====== 00:12:47.556 trtype: tcp 00:12:47.556 adrfam: ipv4 00:12:47.556 subtype: current discovery subsystem 00:12:47.556 treq: not required 00:12:47.556 portid: 0 00:12:47.556 trsvcid: 4420 00:12:47.556 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:47.556 traddr: 10.0.0.2 00:12:47.556 eflags: explicit discovery connections, duplicate discovery information 00:12:47.556 sectype: none 00:12:47.556 =====Discovery Log Entry 1====== 00:12:47.556 trtype: tcp 00:12:47.556 adrfam: ipv4 00:12:47.556 subtype: nvme subsystem 00:12:47.556 treq: not required 00:12:47.556 portid: 0 00:12:47.556 trsvcid: 4420 00:12:47.556 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:47.556 traddr: 10.0.0.2 00:12:47.556 eflags: none 00:12:47.556 sectype: none 00:12:47.556 =====Discovery Log Entry 2====== 00:12:47.556 trtype: tcp 00:12:47.556 adrfam: ipv4 00:12:47.556 subtype: nvme subsystem 00:12:47.556 treq: not required 00:12:47.556 portid: 0 00:12:47.556 trsvcid: 4420 00:12:47.556 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:47.556 traddr: 10.0.0.2 00:12:47.556 eflags: none 00:12:47.556 sectype: none 00:12:47.556 =====Discovery Log Entry 3====== 00:12:47.556 trtype: tcp 00:12:47.556 adrfam: ipv4 00:12:47.556 subtype: nvme subsystem 00:12:47.556 treq: not required 00:12:47.556 portid: 0 00:12:47.556 trsvcid: 4420 00:12:47.556 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:47.556 traddr: 10.0.0.2 00:12:47.556 eflags: none 00:12:47.556 sectype: none 00:12:47.556 =====Discovery Log Entry 4====== 00:12:47.556 trtype: tcp 00:12:47.556 adrfam: ipv4 00:12:47.556 subtype: nvme subsystem 
00:12:47.556 treq: not required 00:12:47.556 portid: 0 00:12:47.556 trsvcid: 4420 00:12:47.556 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:47.556 traddr: 10.0.0.2 00:12:47.556 eflags: none 00:12:47.556 sectype: none 00:12:47.556 =====Discovery Log Entry 5====== 00:12:47.556 trtype: tcp 00:12:47.556 adrfam: ipv4 00:12:47.556 subtype: discovery subsystem referral 00:12:47.556 treq: not required 00:12:47.556 portid: 0 00:12:47.556 trsvcid: 4430 00:12:47.556 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:47.556 traddr: 10.0.0.2 00:12:47.556 eflags: none 00:12:47.556 sectype: none 00:12:47.556 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:47.556 Perform nvmf subsystem discovery via RPC 00:12:47.556 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:47.556 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.556 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.556 [ 00:12:47.556 { 00:12:47.556 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:47.556 "subtype": "Discovery", 00:12:47.556 "listen_addresses": [ 00:12:47.556 { 00:12:47.556 "trtype": "TCP", 00:12:47.556 "adrfam": "IPv4", 00:12:47.556 "traddr": "10.0.0.2", 00:12:47.556 "trsvcid": "4420" 00:12:47.556 } 00:12:47.556 ], 00:12:47.556 "allow_any_host": true, 00:12:47.556 "hosts": [] 00:12:47.556 }, 00:12:47.556 { 00:12:47.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:47.556 "subtype": "NVMe", 00:12:47.556 "listen_addresses": [ 00:12:47.556 { 00:12:47.556 "trtype": "TCP", 00:12:47.556 "adrfam": "IPv4", 00:12:47.556 "traddr": "10.0.0.2", 00:12:47.556 "trsvcid": "4420" 00:12:47.556 } 00:12:47.556 ], 00:12:47.556 "allow_any_host": true, 00:12:47.556 "hosts": [], 00:12:47.556 "serial_number": "SPDK00000000000001", 00:12:47.556 "model_number": "SPDK bdev Controller", 00:12:47.556 "max_namespaces": 32, 00:12:47.556 "min_cntlid": 1, 00:12:47.556 "max_cntlid": 65519, 00:12:47.556 "namespaces": [ 00:12:47.556 { 00:12:47.556 "nsid": 1, 00:12:47.556 "bdev_name": "Null1", 00:12:47.556 "name": "Null1", 00:12:47.556 "nguid": "E94A8B877B4144EB9C7A28B0256B4D9D", 00:12:47.556 "uuid": "e94a8b87-7b41-44eb-9c7a-28b0256b4d9d" 00:12:47.556 } 00:12:47.556 ] 00:12:47.556 }, 00:12:47.556 { 00:12:47.556 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:47.556 "subtype": "NVMe", 00:12:47.556 "listen_addresses": [ 00:12:47.556 { 00:12:47.556 "trtype": "TCP", 00:12:47.556 "adrfam": "IPv4", 00:12:47.556 "traddr": "10.0.0.2", 00:12:47.556 "trsvcid": "4420" 00:12:47.556 } 00:12:47.556 ], 00:12:47.556 "allow_any_host": true, 00:12:47.556 "hosts": [], 00:12:47.556 "serial_number": "SPDK00000000000002", 00:12:47.556 "model_number": "SPDK bdev Controller", 00:12:47.556 "max_namespaces": 32, 00:12:47.556 "min_cntlid": 1, 00:12:47.556 "max_cntlid": 65519, 00:12:47.556 "namespaces": [ 00:12:47.556 { 00:12:47.556 "nsid": 1, 00:12:47.556 "bdev_name": "Null2", 00:12:47.556 "name": "Null2", 00:12:47.556 "nguid": "A30EE5BDECDE47E2BB517E542D8CB92F", 00:12:47.556 "uuid": "a30ee5bd-ecde-47e2-bb51-7e542d8cb92f" 00:12:47.556 } 00:12:47.556 ] 00:12:47.556 }, 00:12:47.556 { 00:12:47.556 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:47.556 "subtype": "NVMe", 00:12:47.556 "listen_addresses": [ 00:12:47.556 { 00:12:47.556 "trtype": "TCP", 00:12:47.556 "adrfam": "IPv4", 00:12:47.556 "traddr": "10.0.0.2", 
00:12:47.556 "trsvcid": "4420" 00:12:47.556 } 00:12:47.556 ], 00:12:47.556 "allow_any_host": true, 00:12:47.556 "hosts": [], 00:12:47.556 "serial_number": "SPDK00000000000003", 00:12:47.556 "model_number": "SPDK bdev Controller", 00:12:47.556 "max_namespaces": 32, 00:12:47.556 "min_cntlid": 1, 00:12:47.556 "max_cntlid": 65519, 00:12:47.556 "namespaces": [ 00:12:47.556 { 00:12:47.556 "nsid": 1, 00:12:47.556 "bdev_name": "Null3", 00:12:47.556 "name": "Null3", 00:12:47.556 "nguid": "19C472557CA944E6A503CB5D28613A6C", 00:12:47.556 "uuid": "19c47255-7ca9-44e6-a503-cb5d28613a6c" 00:12:47.556 } 00:12:47.556 ] 00:12:47.556 }, 00:12:47.556 { 00:12:47.556 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:47.556 "subtype": "NVMe", 00:12:47.556 "listen_addresses": [ 00:12:47.556 { 00:12:47.556 "trtype": "TCP", 00:12:47.556 "adrfam": "IPv4", 00:12:47.556 "traddr": "10.0.0.2", 00:12:47.556 "trsvcid": "4420" 00:12:47.556 } 00:12:47.556 ], 00:12:47.556 "allow_any_host": true, 00:12:47.556 "hosts": [], 00:12:47.556 "serial_number": "SPDK00000000000004", 00:12:47.556 "model_number": "SPDK bdev Controller", 00:12:47.556 "max_namespaces": 32, 00:12:47.556 "min_cntlid": 1, 00:12:47.556 "max_cntlid": 65519, 00:12:47.556 "namespaces": [ 00:12:47.556 { 00:12:47.556 "nsid": 1, 00:12:47.556 "bdev_name": "Null4", 00:12:47.556 "name": "Null4", 00:12:47.556 "nguid": "3CB04CFB5A784CF09DB5C7C2C1DA9053", 00:12:47.556 "uuid": "3cb04cfb-5a78-4cf0-9db5-c7c2c1da9053" 00:12:47.556 } 00:12:47.556 ] 00:12:47.556 } 00:12:47.556 ] 00:12:47.556 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.556 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:47.556 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:47.556 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.556 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.556 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.557 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.557 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:47.557 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.557 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.557 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.557 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:47.557 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:47.557 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.557 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.557 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.557 08:11:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:47.557 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.557 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.557 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.557 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:47.557 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:47.557 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.557 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.557 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.557 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:47.557 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.557 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.557 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.819 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:47.819 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:47.819 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.819 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.819 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.819 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:47.819 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.819 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.819 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.819 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:47.819 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.819 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.819 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.819 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:47.819 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:47.819 08:11:44 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.819 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.819 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.819 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:47.819 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:47.819 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:47.819 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:47.819 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:47.820 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:47.820 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:47.820 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:47.820 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:47.820 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:47.820 rmmod nvme_tcp 00:12:47.820 rmmod nvme_fabrics 00:12:47.820 rmmod nvme_keyring 00:12:47.820 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:47.820 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:47.820 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:47.820 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1862066 ']' 00:12:47.820 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1862066 00:12:47.820 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1862066 ']' 00:12:47.820 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1862066 00:12:47.820 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:47.820 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:47.820 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1862066 00:12:47.820 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:47.820 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:47.820 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1862066' 00:12:47.820 killing process with pid 1862066 00:12:47.820 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1862066 00:12:47.820 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1862066 00:12:48.081 08:11:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:48.081 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:48.081 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:48.081 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:48.081 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:48.081 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:48.081 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:48.081 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:48.081 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:48.081 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.081 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.081 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:50.631 00:12:50.631 real 0m11.669s 00:12:50.631 user 0m8.697s 00:12:50.631 sys 0m6.179s 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.631 ************************************ 00:12:50.631 END TEST nvmf_target_discovery 00:12:50.631 ************************************ 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:50.631 ************************************ 00:12:50.631 START TEST nvmf_referrals 00:12:50.631 ************************************ 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:50.631 * Looking for test storage... 
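The teardown that closes the discovery test is symmetric to the setup. A sketch of what nvmfcleanup/killprocess/nvmf_tcp_fini ran here, in the order logged; the body of the _remove_spdk_ns helper is not shown in the trace, so the netns delete below is an assumption:

    sync
    modprobe -v -r nvme-tcp        # rmmod nvme_tcp / nvme_fabrics / nvme_keyring, as logged
    modprobe -v -r nvme-fabrics
    kill 1862066 && wait 1862066   # killprocess: stop the nvmf_tgt (reactor_0) process
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the SPDK-tagged ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk   # assumed: _remove_spdk_ns body is not logged
    ip -4 addr flush cvl_0_1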
00:12:50.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:50.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.631 --rc genhtml_branch_coverage=1 00:12:50.631 --rc genhtml_function_coverage=1 00:12:50.631 --rc genhtml_legend=1 00:12:50.631 --rc geninfo_all_blocks=1 00:12:50.631 --rc geninfo_unexecuted_blocks=1 00:12:50.631 00:12:50.631 ' 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:50.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.631 --rc genhtml_branch_coverage=1 00:12:50.631 --rc genhtml_function_coverage=1 00:12:50.631 --rc genhtml_legend=1 00:12:50.631 --rc geninfo_all_blocks=1 00:12:50.631 --rc geninfo_unexecuted_blocks=1 00:12:50.631 00:12:50.631 ' 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:50.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.631 --rc genhtml_branch_coverage=1 00:12:50.631 --rc genhtml_function_coverage=1 00:12:50.631 --rc genhtml_legend=1 00:12:50.631 --rc geninfo_all_blocks=1 00:12:50.631 --rc geninfo_unexecuted_blocks=1 00:12:50.631 00:12:50.631 ' 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:50.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.631 --rc genhtml_branch_coverage=1 00:12:50.631 --rc genhtml_function_coverage=1 00:12:50.631 --rc genhtml_legend=1 00:12:50.631 --rc geninfo_all_blocks=1 00:12:50.631 --rc geninfo_unexecuted_blocks=1 00:12:50.631 00:12:50.631 ' 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.631 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:50.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:50.632 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:58.777 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:58.778 08:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:58.778 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:58.778 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:58.778 
08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:58.778 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:58.778 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:58.778 08:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:58.778 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:58.778 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:58.778 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:58.778 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:58.778 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:58.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:58.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:12:58.778 00:12:58.778 --- 10.0.0.2 ping statistics --- 00:12:58.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.778 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:12:58.778 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:58.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:58.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:12:58.778 00:12:58.778 --- 10.0.0.1 ping statistics --- 00:12:58.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:58.778 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:12:58.778 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:58.778 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:58.778 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:58.778 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:58.778 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:58.778 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:58.778 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:58.778 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:58.778 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:58.778 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:58.779 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:58.779 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:58.779 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:58.779 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1866550 00:12:58.779 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1866550 00:12:58.779 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:58.779 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1866550 ']' 00:12:58.779 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.779 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.779 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
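The records above show nvmf_tcp_init building the test topology for this two-port E810 card: one port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, a comment-tagged iptables rule opens TCP/4420, and reachability is ping-verified in both directions. A minimal sketch of the same setup, assuming the interface names from this run (they are host-specific):

  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"            # target port now only visible inside the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"        # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF             # tagged so cleanup can strip it later
  ping -c 1 10.0.0.2                           # root namespace -> target port
  ip netns exec "$NS" ping -c 1 10.0.0.1       # namespace -> initiator port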
00:12:58.779 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.779 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:58.779 [2024-11-28 08:11:55.231581] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:12:58.779 [2024-11-28 08:11:55.231645] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.779 [2024-11-28 08:11:55.332925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:58.779 [2024-11-28 08:11:55.386880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.779 [2024-11-28 08:11:55.386937] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.779 [2024-11-28 08:11:55.386946] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.779 [2024-11-28 08:11:55.386953] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.779 [2024-11-28 08:11:55.386959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:58.779 [2024-11-28 08:11:55.388999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.779 [2024-11-28 08:11:55.389175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.779 [2024-11-28 08:11:55.389323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:58.779 [2024-11-28 08:11:55.389421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.779 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:58.779 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:58.779 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:58.779 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:58.779 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.039 [2024-11-28 08:11:56.111348] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
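With the target up inside the namespace, the transport is created and a discovery listener opened on 10.0.0.2:8009; the records that follow add three referrals and then check them from both ends. A condensed sketch of that RPC sequence; rpc_cmd in the log wraps SPDK's rpc.py talking to /var/tmp/spdk.sock, and the script path below is an assumption:

  RPC="scripts/rpc.py"                              # assumed location inside an SPDK checkout
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do       # referral targets under test
    $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  # Target-side view of the referral list ...
  $RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  # ... versus what a host sees in the discovery log page; the jq filter
  # drops the "current discovery subsystem" entry, leaving only referrals.
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

The test passes when both sorted lists match, which is exactly what the [[ ... == ... ]] comparisons in the records below assert.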
00:12:59.039 [2024-11-28 08:11:56.139462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.039 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:59.040 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:59.040 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:59.040 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:59.040 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:59.040 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:59.040 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:59.040 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:59.300 08:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:59.300 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:59.561 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:59.561 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:59.561 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:59.561 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.561 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.561 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.561 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:59.561 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.561 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.561 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.561 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:59.561 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:59.561 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:59.561 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:59.561 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.561 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:59.561 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:59.561 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.822 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:59.822 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:59.822 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:59.822 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:59.822 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:59.822 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:59.822 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:59.822 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:59.822 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:59.822 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:59.822 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:59.822 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:59.822 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:59.822 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:59.822 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:00.084 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:00.084 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:00.084 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:00.084 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:00.084 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:00.084 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:00.347 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:00.347 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:00.347 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.347 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:00.347 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.347 08:11:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:00.347 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:00.347 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:00.347 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:00.347 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.347 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:00.347 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:00.347 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.347 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:00.347 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:00.347 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:00.347 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:00.347 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:00.347 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:00.347 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:00.347 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:00.608 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:00.608 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:00.608 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:00.608 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:00.608 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:00.608 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:00.608 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:00.608 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:00.608 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:00.608 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:00.608 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:13:00.608 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:00.868 08:11:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:00.868 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:00.868 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:00.868 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.868 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:00.868 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.868 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:00.868 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:00.868 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.868 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:00.868 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.868 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:00.868 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:00.869 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:00.869 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:00.869 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:00.869 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:00.869 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:01.129 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:01.129 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:01.129 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:01.129 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:01.129 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:01.129 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:01.129 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
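The referral checks are complete (nvmf_discovery_get_referrals reports length 0 after the removals) and nvmftestfini takes over: the records that follow unload the kernel modules, strip the tagged firewall rules, and tear down the namespace. A simplified equivalent of that teardown, assuming _remove_spdk_ns boils down to deleting the namespace (the log does not show its body):

  kill "$nvmfpid" && wait "$nvmfpid"     # stop nvmf_tgt; pid saved by nvmfappstart
  modprobe -v -r nvme-tcp                # rmmod nvme_tcp plus fabrics/keyring dependencies
  modprobe -v -r nvme-fabrics
  # Only the rules tagged SPDK_NVMF at insert time are dropped; everything
  # else in the ruleset survives the save/filter/restore round trip.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk        # assumed: returns cvl_0_0 to the root namespace
  ip -4 addr flush cvl_0_1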
00:13:01.129 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:13:01.129 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:01.129 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:01.129 rmmod nvme_tcp 00:13:01.129 rmmod nvme_fabrics 00:13:01.129 rmmod nvme_keyring 00:13:01.390 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:01.390 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:01.390 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:01.390 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1866550 ']' 00:13:01.390 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1866550 00:13:01.390 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1866550 ']' 00:13:01.390 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1866550 00:13:01.390 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:13:01.390 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:01.390 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1866550 00:13:01.390 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:01.390 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:01.390 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1866550' 00:13:01.390 killing process with pid 1866550 00:13:01.390 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1866550 00:13:01.390 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1866550 00:13:01.390 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:01.391 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:01.391 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:01.391 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:01.391 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:13:01.391 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:01.391 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:13:01.391 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:01.391 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:01.391 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.391 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.391 08:11:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.940 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:03.940 00:13:03.940 real 0m13.316s 00:13:03.940 user 0m16.109s 00:13:03.940 sys 0m6.556s 00:13:03.940 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:03.940 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:03.940 ************************************ 00:13:03.940 END TEST nvmf_referrals 00:13:03.940 ************************************ 00:13:03.940 08:12:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:03.940 08:12:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:03.940 08:12:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:03.941 ************************************ 00:13:03.941 START TEST nvmf_connect_disconnect 00:13:03.941 ************************************ 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:03.941 * Looking for test storage... 00:13:03.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:03.941 08:12:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:03.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.941 --rc genhtml_branch_coverage=1 00:13:03.941 --rc genhtml_function_coverage=1 00:13:03.941 --rc genhtml_legend=1 00:13:03.941 --rc geninfo_all_blocks=1 00:13:03.941 --rc geninfo_unexecuted_blocks=1 00:13:03.941 00:13:03.941 ' 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:03.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.941 --rc genhtml_branch_coverage=1 00:13:03.941 --rc genhtml_function_coverage=1 00:13:03.941 --rc genhtml_legend=1 00:13:03.941 --rc geninfo_all_blocks=1 00:13:03.941 --rc geninfo_unexecuted_blocks=1 00:13:03.941 00:13:03.941 ' 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:03.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.941 --rc genhtml_branch_coverage=1 00:13:03.941 --rc genhtml_function_coverage=1 00:13:03.941 --rc genhtml_legend=1 00:13:03.941 --rc geninfo_all_blocks=1 00:13:03.941 --rc geninfo_unexecuted_blocks=1 00:13:03.941 00:13:03.941 ' 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:03.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.941 --rc genhtml_branch_coverage=1 00:13:03.941 --rc genhtml_function_coverage=1 00:13:03.941 --rc genhtml_legend=1 00:13:03.941 --rc geninfo_all_blocks=1 00:13:03.941 --rc geninfo_unexecuted_blocks=1 00:13:03.941 00:13:03.941 ' 00:13:03.941 08:12:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.941 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:03.942 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:03.942 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:03.942 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:03.942 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.942 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.942 08:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:03.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:03.942 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:03.942 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:03.942 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:03.942 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:03.942 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:03.942 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:03.942 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:03.942 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:03.942 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:03.942 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:03.942 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:03.942 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.942 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:03.942 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.942 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:03.942 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:03.942 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:03.942 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:12.083 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:12.083 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:12.083 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:12.083 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:12.083 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:12.083 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:12.083 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:12.083 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:12.083 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:12.083 
08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:12.083 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:12.083 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:12.083 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:12.083 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:12.083 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:12.083 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:12.083 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:12.084 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:12.084 
08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:12.084 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:12.084 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
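The trace above is the harness's NIC discovery pass: common.sh seeds arrays of supported PCI vendor:device IDs (e810, x722, mlx), resolves each matching PCI function to its kernel interface through sysfs, and keeps only links that are up. A minimal sketch of that mapping, using the same sysfs glob the trace shows at common.sh@411 and @427; the operstate read is an assumption, since the trace only shows the already-expanded result of an up/down comparison at @418:

  #!/usr/bin/env bash
  # Map NVMe-oF-capable PCI functions to their net interfaces, as in the trace.
  shopt -s nullglob                            # empty glob -> empty array, not a literal '*'
  pci_devs=("0000:4b:00.0" "0000:4b:00.1")     # the two E810 ports found in this run
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # same glob as common.sh@411
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep basenames, as at @427
      for dev in "${pci_net_devs[@]}"; do
          # assumption: filter on link state, matching the [[ up == up ]] check at @418
          state=$(cat "/sys/class/net/$dev/operstate" 2>/dev/null)
          [[ $state == up ]] && net_devs+=("$dev")
      done
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done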
00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:12.084 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:12.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:12.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:13:12.084 00:13:12.084 --- 10.0.0.2 ping statistics --- 00:13:12.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.084 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:12.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:12.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:13:12.084 00:13:12.084 --- 10.0.0.1 ping statistics --- 00:13:12.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.084 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:12.084 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:12.085 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:12.085 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=1871727 00:13:12.085 08:12:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1871727 00:13:12.085 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:12.085 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1871727 ']' 00:13:12.085 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.085 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.085 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.085 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.085 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:12.085 [2024-11-28 08:12:08.641155] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:13:12.085 [2024-11-28 08:12:08.641230] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.085 [2024-11-28 08:12:08.742871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:12.085 [2024-11-28 08:12:08.796849] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.085 [2024-11-28 08:12:08.796910] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.085 [2024-11-28 08:12:08.796919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.085 [2024-11-28 08:12:08.796926] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.085 [2024-11-28 08:12:08.796932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
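nvmf_tcp_init, traced above, turns the two back-to-back E810 ports into an initiator/target pair: the first port is moved into a private network namespace for the target, 10.0.0.1 (initiator) and 10.0.0.2 (target) are assigned, an iptables rule opens the NVMe/TCP port, and a ping in each direction proves the path before nvmf_tgt is started inside the namespace. Condensed from the commands in the trace (binary path abbreviated):

  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1    # start from clean ports
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                         # target port lives in $NS
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, host netns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  ping -c 1 10.0.0.2                                      # host -> namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1                  # namespace -> host
  # the target app then runs entirely inside the namespace:
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &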
00:13:12.085 [2024-11-28 08:12:08.799390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.085 [2024-11-28 08:12:08.799556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:12.085 [2024-11-28 08:12:08.799720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.085 [2024-11-28 08:12:08.799720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:12.346 [2024-11-28 08:12:09.521732] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:12.346 08:12:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:12.346 [2024-11-28 08:12:09.606590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:12.346 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:16.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.663 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:30.663 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:30.663 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:30.663 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:30.663 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:30.663 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:30.663 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:30.663 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:30.663 rmmod nvme_tcp 00:13:30.663 rmmod nvme_fabrics 00:13:30.663 rmmod nvme_keyring 00:13:30.663 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:30.663 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:30.663 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:30.663 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1871727 ']' 00:13:30.663 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1871727 00:13:30.663 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1871727 ']' 00:13:30.663 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1871727 00:13:30.663 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
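connect_disconnect.sh, whose setup is traced above, provisions one malloc-backed subsystem over RPC and then loops nvme connect/disconnect against it; the loop body runs under 'set +x', so only the five "disconnected 1 controller(s)" lines surface in the log. A sketch of the sequence, where the RPC calls mirror the trace verbatim but the nvme-cli loop body is an assumption inferred from that output, not a copy of the script:

  rpc=scripts/rpc.py        # talks to /var/tmp/spdk.sock of the nvmf_tgt started above
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
  $rpc bdev_malloc_create 64 512          # 64 MiB bdev, 512 B blocks -> "Malloc0"
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  for i in $(seq 1 5); do                 # num_iterations=5 in the trace
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # "... disconnected 1 controller(s)"
  done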
00:13:30.663 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:30.663 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1871727 00:13:30.663 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:30.663 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:30.663 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1871727' 00:13:30.663 killing process with pid 1871727 00:13:30.663 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1871727 00:13:30.663 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1871727 00:13:30.924 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:30.924 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:30.924 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:30.924 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:13:30.924 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:13:30.924 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:30.924 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:13:30.924 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:30.924 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:30.924 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.924 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:30.924 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.892 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:32.892 00:13:32.892 real 0m29.313s 00:13:32.892 user 1m18.652s 00:13:32.892 sys 0m7.250s 00:13:32.892 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:32.892 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:32.892 ************************************ 00:13:32.892 END TEST nvmf_connect_disconnect 00:13:32.892 ************************************ 00:13:32.892 08:12:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:32.892 08:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:32.892 08:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.892 08:12:30 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:33.171 ************************************ 00:13:33.171 START TEST nvmf_multitarget 00:13:33.171 ************************************ 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:33.171 * Looking for test storage... 00:13:33.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:33.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.171 --rc genhtml_branch_coverage=1 00:13:33.171 --rc genhtml_function_coverage=1 00:13:33.171 --rc genhtml_legend=1 00:13:33.171 --rc geninfo_all_blocks=1 00:13:33.171 --rc geninfo_unexecuted_blocks=1 00:13:33.171 00:13:33.171 ' 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:33.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.171 --rc genhtml_branch_coverage=1 00:13:33.171 --rc genhtml_function_coverage=1 00:13:33.171 --rc genhtml_legend=1 00:13:33.171 --rc geninfo_all_blocks=1 00:13:33.171 --rc geninfo_unexecuted_blocks=1 00:13:33.171 00:13:33.171 ' 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:33.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.171 --rc genhtml_branch_coverage=1 00:13:33.171 --rc genhtml_function_coverage=1 00:13:33.171 --rc genhtml_legend=1 00:13:33.171 --rc geninfo_all_blocks=1 00:13:33.171 --rc geninfo_unexecuted_blocks=1 00:13:33.171 00:13:33.171 ' 00:13:33.171 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:33.171 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.171 --rc genhtml_branch_coverage=1 00:13:33.171 --rc genhtml_function_coverage=1 00:13:33.171 --rc genhtml_legend=1 00:13:33.171 --rc geninfo_all_blocks=1 00:13:33.172 --rc geninfo_unexecuted_blocks=1 00:13:33.172 00:13:33.172 ' 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:33.172 08:12:30 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:33.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:33.172 08:12:30 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:13:33.172 08:12:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:41.370 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:41.370 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:13:41.370 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:41.370 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:41.370 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:41.370 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:41.370 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:41.370 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:13:41.370 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:41.370 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:13:41.370 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:13:41.370 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:13:41.370 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:13:41.370 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:13:41.370 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:13:41.370 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:41.370 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:41.370 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
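Worth flagging: the "[: : integer expression expected" complaint from common.sh line 33, which appears twice in this section, is a real (if harmless here) script bug rather than log noise. '[' with -eq requires integer operands, and the traced test '[' '' -eq 1 ']' shows the tested variable expanding to the empty string. The log does not reveal which variable line 33 reads, so VAR below is a stand-in; the usual fix is an explicit integer default:

  [ '' -eq 1 ]                             # reproduces: [: : integer expression expected
  [ "${VAR:-0}" -eq 1 ] && echo enabled    # empty/unset VAR now compares cleanly as 0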
00:13:41.370 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:41.370 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:41.370 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:41.371 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:41.371 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:41.371 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:41.371 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:41.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:41.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:13:41.371 00:13:41.371 --- 10.0.0.2 ping statistics --- 00:13:41.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.371 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:41.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:41.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:13:41.371 00:13:41.371 --- 10.0.0.1 ping statistics --- 00:13:41.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.371 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1880031 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1880031 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1880031 ']' 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.371 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:41.372 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.372 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:41.372 08:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:41.372 [2024-11-28 08:12:37.984334] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:13:41.372 [2024-11-28 08:12:37.984401] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.372 [2024-11-28 08:12:38.084320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:41.372 [2024-11-28 08:12:38.137211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.372 [2024-11-28 08:12:38.137265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.372 [2024-11-28 08:12:38.137274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.372 [2024-11-28 08:12:38.137282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.372 [2024-11-28 08:12:38.137289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:41.372 [2024-11-28 08:12:38.139640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.372 [2024-11-28 08:12:38.139800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.372 [2024-11-28 08:12:38.139926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.372 [2024-11-28 08:12:38.139926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.633 08:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.633 08:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:13:41.633 08:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:41.633 08:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:41.633 08:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:41.633 08:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.633 08:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:41.633 08:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:41.633 08:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:41.895 08:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:41.895 08:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:41.895 "nvmf_tgt_1" 00:13:41.895 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:42.156 "nvmf_tgt_2" 00:13:42.156 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
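The multitarget test traced above exercises SPDK's ability to host several independent NVMe-oF targets in one nvmf_tgt process: it confirms only the default target exists, creates nvmf_tgt_1 and nvmf_tgt_2, checks the count reaches three, deletes both, and checks the count drops back to one. The same sequence as a standalone sketch, using the helper and flags exactly as shown in the trace:

  rpc=test/nvmf/target/multitarget_rpc.py
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # trace echoes "nvmf_tgt_1"
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new ones
  $rpc nvmf_delete_target -n nvmf_tgt_1              # trace echoes "true" on success
  $rpc nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]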
00:13:42.156 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:42.156 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:42.156 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:42.156 true 00:13:42.156 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:42.417 true 00:13:42.417 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:42.417 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:42.417 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:42.417 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:42.417 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:42.417 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:42.417 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:42.417 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:42.417 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:42.417 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:42.417 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:42.417 rmmod nvme_tcp 00:13:42.417 rmmod nvme_fabrics 00:13:42.678 rmmod nvme_keyring 00:13:42.678 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:42.678 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:42.678 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:42.678 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1880031 ']' 00:13:42.678 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1880031 00:13:42.678 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1880031 ']' 00:13:42.678 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1880031 00:13:42.678 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:13:42.678 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:42.678 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1880031 00:13:42.678 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:42.678 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:42.678 08:12:39 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1880031' 00:13:42.678 killing process with pid 1880031 00:13:42.678 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1880031 00:13:42.678 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1880031 00:13:42.939 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:42.939 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:42.939 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:42.939 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:42.939 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:13:42.939 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:42.939 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:13:42.939 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:42.939 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:42.939 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.939 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.939 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.852 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:44.852 00:13:44.852 real 0m11.875s 00:13:44.852 user 0m10.362s 00:13:44.852 sys 0m6.180s 00:13:44.852 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:44.852 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:44.852 ************************************ 00:13:44.852 END TEST nvmf_multitarget 00:13:44.852 ************************************ 00:13:44.852 08:12:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:44.852 08:12:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:44.852 08:12:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.852 08:12:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:45.114 ************************************ 00:13:45.114 START TEST nvmf_rpc 00:13:45.114 ************************************ 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:45.114 * Looking for test storage... 
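[Annotation — not part of the captured log] The nvmftestfini teardown that closed the multitarget run above is interleaved through the trace; gathered in one place, and assuming nothing beyond what it logged, it is roughly:

  kill "$nvmfpid" && wait "$nvmfpid"                     # killprocess
  modprobe -v -r nvme-tcp                                # unloads nvme_tcp/nvme_fabrics/nvme_keyring
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test's ACCEPT rules
  ip -4 addr flush cvl_0_1
  ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns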
00:13:45.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:45.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.114 --rc genhtml_branch_coverage=1 00:13:45.114 --rc genhtml_function_coverage=1 00:13:45.114 --rc genhtml_legend=1 00:13:45.114 --rc geninfo_all_blocks=1 00:13:45.114 --rc geninfo_unexecuted_blocks=1 00:13:45.114 00:13:45.114 ' 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:45.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.114 --rc genhtml_branch_coverage=1 00:13:45.114 --rc genhtml_function_coverage=1 00:13:45.114 --rc genhtml_legend=1 00:13:45.114 --rc geninfo_all_blocks=1 00:13:45.114 --rc geninfo_unexecuted_blocks=1 00:13:45.114 00:13:45.114 ' 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:45.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.114 --rc genhtml_branch_coverage=1 00:13:45.114 --rc genhtml_function_coverage=1 00:13:45.114 --rc genhtml_legend=1 00:13:45.114 --rc geninfo_all_blocks=1 00:13:45.114 --rc geninfo_unexecuted_blocks=1 00:13:45.114 00:13:45.114 ' 00:13:45.114 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:45.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.114 --rc genhtml_branch_coverage=1 00:13:45.115 --rc genhtml_function_coverage=1 00:13:45.115 --rc genhtml_legend=1 00:13:45.115 --rc geninfo_all_blocks=1 00:13:45.115 --rc geninfo_unexecuted_blocks=1 00:13:45.115 00:13:45.115 ' 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
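[Annotation — not part of the captured log] The lcov probe traced above is scripts/common.sh splitting each version string on '.', '-' and ':' and comparing numeric fields left to right; a standalone sketch of that comparison:

  # lt A B: succeed when version A sorts strictly before version B
  # (numeric fields only; the real helper also handles '>' and '==')
  lt() {
      local -a v1 v2
      local i
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1    # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov older than 2: keep the lcov_*_coverage=1 option spelling"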
00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:45.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:45.115 08:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:45.115 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:53.260 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:53.260 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:53.260 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:53.260 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:53.260 08:12:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:53.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:53.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:13:53.260 00:13:53.260 --- 10.0.0.2 ping statistics --- 00:13:53.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.260 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:53.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:53.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:13:53.260 00:13:53.260 --- 10.0.0.1 ping statistics --- 00:13:53.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.260 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1884730 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1884730 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1884730 ']' 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:53.260 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.260 [2024-11-28 08:12:50.024993] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
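[Annotation — not part of the captured log] rpc.sh now brings up a second target (pid 1884730) and, over the next few seconds of trace, builds its configuration: TCP transport, a malloc bdev, a subsystem with namespace and listener, then a host-side connect. rpc_cmd in the harness fronts SPDK's scripts/rpc.py, so the sequence condenses to roughly:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # host side: NVME_HOSTNQN/NVME_HOSTID are the generated values visible in this trace
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The test then flips allow-any-host off (nvmf_subsystem_allow_any_host -d) to verify that an unregistered host NQN is rejected with "does not allow host" before registering it explicitly via nvmf_subsystem_add_host, as seen in the trace that follows.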
00:13:53.260 [2024-11-28 08:12:50.025066] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.260 [2024-11-28 08:12:50.127404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:53.260 [2024-11-28 08:12:50.183093] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.261 [2024-11-28 08:12:50.183150] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.261 [2024-11-28 08:12:50.183167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.261 [2024-11-28 08:12:50.183175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.261 [2024-11-28 08:12:50.183182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:53.261 [2024-11-28 08:12:50.185506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.261 [2024-11-28 08:12:50.185671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.261 [2024-11-28 08:12:50.185837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.261 [2024-11-28 08:12:50.185837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:53.833 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:53.833 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:53.833 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:53.833 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:53.833 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.833 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.833 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:53.833 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.833 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.833 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.833 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:53.833 "tick_rate": 2400000000, 00:13:53.833 "poll_groups": [ 00:13:53.833 { 00:13:53.833 "name": "nvmf_tgt_poll_group_000", 00:13:53.833 "admin_qpairs": 0, 00:13:53.833 "io_qpairs": 0, 00:13:53.833 "current_admin_qpairs": 0, 00:13:53.833 "current_io_qpairs": 0, 00:13:53.833 "pending_bdev_io": 0, 00:13:53.833 "completed_nvme_io": 0, 00:13:53.833 "transports": [] 00:13:53.833 }, 00:13:53.833 { 00:13:53.833 "name": "nvmf_tgt_poll_group_001", 00:13:53.833 "admin_qpairs": 0, 00:13:53.833 "io_qpairs": 0, 00:13:53.833 "current_admin_qpairs": 0, 00:13:53.833 "current_io_qpairs": 0, 00:13:53.833 "pending_bdev_io": 0, 00:13:53.833 "completed_nvme_io": 0, 00:13:53.833 "transports": [] 00:13:53.833 }, 00:13:53.833 { 00:13:53.833 "name": "nvmf_tgt_poll_group_002", 00:13:53.833 "admin_qpairs": 0, 00:13:53.833 "io_qpairs": 0, 00:13:53.833 
"current_admin_qpairs": 0, 00:13:53.833 "current_io_qpairs": 0, 00:13:53.833 "pending_bdev_io": 0, 00:13:53.833 "completed_nvme_io": 0, 00:13:53.833 "transports": [] 00:13:53.833 }, 00:13:53.833 { 00:13:53.833 "name": "nvmf_tgt_poll_group_003", 00:13:53.833 "admin_qpairs": 0, 00:13:53.833 "io_qpairs": 0, 00:13:53.833 "current_admin_qpairs": 0, 00:13:53.833 "current_io_qpairs": 0, 00:13:53.833 "pending_bdev_io": 0, 00:13:53.833 "completed_nvme_io": 0, 00:13:53.833 "transports": [] 00:13:53.833 } 00:13:53.833 ] 00:13:53.833 }' 00:13:53.833 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:53.833 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:53.833 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:53.833 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:53.834 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:53.834 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:53.834 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:53.834 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:53.834 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.834 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.834 [2024-11-28 08:12:51.026904] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.834 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.834 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:53.834 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.834 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.834 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.834 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:53.834 "tick_rate": 2400000000, 00:13:53.834 "poll_groups": [ 00:13:53.834 { 00:13:53.834 "name": "nvmf_tgt_poll_group_000", 00:13:53.834 "admin_qpairs": 0, 00:13:53.834 "io_qpairs": 0, 00:13:53.834 "current_admin_qpairs": 0, 00:13:53.834 "current_io_qpairs": 0, 00:13:53.834 "pending_bdev_io": 0, 00:13:53.834 "completed_nvme_io": 0, 00:13:53.834 "transports": [ 00:13:53.834 { 00:13:53.834 "trtype": "TCP" 00:13:53.834 } 00:13:53.834 ] 00:13:53.834 }, 00:13:53.834 { 00:13:53.834 "name": "nvmf_tgt_poll_group_001", 00:13:53.834 "admin_qpairs": 0, 00:13:53.834 "io_qpairs": 0, 00:13:53.834 "current_admin_qpairs": 0, 00:13:53.834 "current_io_qpairs": 0, 00:13:53.834 "pending_bdev_io": 0, 00:13:53.834 "completed_nvme_io": 0, 00:13:53.834 "transports": [ 00:13:53.834 { 00:13:53.834 "trtype": "TCP" 00:13:53.834 } 00:13:53.834 ] 00:13:53.834 }, 00:13:53.834 { 00:13:53.834 "name": "nvmf_tgt_poll_group_002", 00:13:53.834 "admin_qpairs": 0, 00:13:53.834 "io_qpairs": 0, 00:13:53.834 "current_admin_qpairs": 0, 00:13:53.834 "current_io_qpairs": 0, 00:13:53.834 "pending_bdev_io": 0, 00:13:53.834 "completed_nvme_io": 0, 00:13:53.834 "transports": [ 00:13:53.834 { 00:13:53.834 "trtype": "TCP" 
00:13:53.834 } 00:13:53.834 ] 00:13:53.834 }, 00:13:53.834 { 00:13:53.834 "name": "nvmf_tgt_poll_group_003", 00:13:53.834 "admin_qpairs": 0, 00:13:53.834 "io_qpairs": 0, 00:13:53.834 "current_admin_qpairs": 0, 00:13:53.834 "current_io_qpairs": 0, 00:13:53.834 "pending_bdev_io": 0, 00:13:53.834 "completed_nvme_io": 0, 00:13:53.834 "transports": [ 00:13:53.834 { 00:13:53.834 "trtype": "TCP" 00:13:53.834 } 00:13:53.834 ] 00:13:53.834 } 00:13:53.834 ] 00:13:53.834 }' 00:13:53.834 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:53.834 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:53.834 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:53.834 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:53.834 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:53.834 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:53.834 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:53.834 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:53.834 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.095 Malloc1 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.095 [2024-11-28 08:12:51.239660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:54.095 [2024-11-28 08:12:51.276818] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:54.095 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:54.095 could not add new controller: failed to write to nvme-fabrics device 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:54.095 08:12:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.095 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:56.011 08:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:56.011 08:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:56.011 08:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:56.011 08:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:56.011 08:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:57.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:57.921 08:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:57.921 08:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:57.921 08:12:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:57.921 08:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:57.921 08:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:57.921 08:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:57.921 [2024-11-28 08:12:55.024230] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:57.921 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:57.921 could not add new controller: failed to write to nvme-fabrics device 00:13:57.921 08:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:57.921 08:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:57.921 08:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:57.921 08:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:57.921 08:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:57.921 08:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.921 08:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.921 
08:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.921 08:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:59.301 08:12:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:59.301 08:12:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:59.301 08:12:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:59.301 08:12:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:59.301 08:12:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:01.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:01.846 
08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.846 [2024-11-28 08:12:58.755945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.846 08:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:03.235 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:03.235 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:03.235 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:03.235 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:03.235 08:13:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:05.151 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:05.151 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:05.151 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:05.151 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:05.151 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:05.151 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:05.151 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:05.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.413 [2024-11-28 08:13:02.520539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.413 08:13:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:07.330 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:07.330 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:07.330 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:07.330 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:07.330 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:09.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.245 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.246 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:09.246 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.246 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.246 [2024-11-28 08:13:06.287156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.246 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.246 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:09.246 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.246 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.246 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.246 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:09.246 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.246 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:09.246 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.246 08:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:10.632 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:10.632 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:10.632 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:10.632 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:10.632 08:13:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:13.204 
08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:13.204 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:13.204 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:13.204 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:13.204 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:13.204 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:13.204 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:13.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.204 [2024-11-28 08:13:10.088463] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.204 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:14.592 08:13:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:14.592 08:13:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:14.592 08:13:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:14.592 08:13:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:14.592 08:13:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:16.504 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:16.504 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:16.504 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:16.504 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:16.504 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:16.504 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:16.505 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:16.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.505 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:16.505 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:16.505 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:16.505 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:14:16.505 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:16.505 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:16.505 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:16.505 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:16.505 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.505 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.505 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.505 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:16.505 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.505 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.505 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.505 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:16.505 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:16.505 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.765 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.765 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.765 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.765 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.765 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.765 [2024-11-28 08:13:13.811031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.765 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.765 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:16.765 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.765 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.765 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.765 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:16.765 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.765 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.765 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.765 08:13:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:18.148 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:18.148 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:18.148 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:18.148 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:18.148 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:20.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:20.691 
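With the allowlist test done, rpc.sh@81-94 has now run the cycle above five times: create the subsystem, add the TCP listener, attach Malloc1 as namespace 5, open the subsystem to any host, connect, wait for the serial, disconnect, then remove the namespace and delete the subsystem. Condensed into a sketch with the same placeholders (and the waitforserial helper sketched earlier):

    for i in $(seq 1 5); do
        ./scripts/rpc.py nvmf_create_subsystem "$SUBNQN" -s SPDKISFASTANDAWESOME
        ./scripts/rpc.py nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
        ./scripts/rpc.py nvmf_subsystem_add_ns "$SUBNQN" Malloc1 -n 5    # fixed NSID 5
        ./scripts/rpc.py nvmf_subsystem_allow_any_host "$SUBNQN"
        nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n "$SUBNQN"
        ./scripts/rpc.py nvmf_subsystem_remove_ns "$SUBNQN" 5
        ./scripts/rpc.py nvmf_delete_subsystem "$SUBNQN"
    done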
08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.691 [2024-11-28 08:13:17.581339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.691 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.692 [2024-11-28 08:13:17.645483] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 
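This second loop (rpc.sh@99-107) never connects a host; it only exercises namespace lifecycle, and nvmf_subsystem_add_ns is now called without -n, so the target assigns the lowest free NSID, which is why the paired remove targets namespace 1. In outline:

    ./scripts/rpc.py nvmf_subsystem_add_ns "$SUBNQN" Malloc1    # auto NSID -> 1
    ./scripts/rpc.py nvmf_subsystem_remove_ns "$SUBNQN" 1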
08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.692 [2024-11-28 08:13:17.717685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.692 [2024-11-28 08:13:17.789920] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.692 [2024-11-28 08:13:17.854104] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.692 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.693 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:20.693 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.693 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.693 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.693 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:20.693 "tick_rate": 2400000000, 00:14:20.693 "poll_groups": [ 00:14:20.693 { 00:14:20.693 "name": "nvmf_tgt_poll_group_000", 00:14:20.693 "admin_qpairs": 0, 00:14:20.693 "io_qpairs": 224, 00:14:20.693 "current_admin_qpairs": 0, 00:14:20.693 "current_io_qpairs": 0, 00:14:20.693 "pending_bdev_io": 0, 00:14:20.693 "completed_nvme_io": 229, 00:14:20.693 "transports": [ 00:14:20.693 { 00:14:20.693 "trtype": "TCP" 00:14:20.693 } 00:14:20.693 ] 00:14:20.693 }, 00:14:20.693 { 00:14:20.693 "name": "nvmf_tgt_poll_group_001", 00:14:20.693 "admin_qpairs": 1, 00:14:20.693 "io_qpairs": 223, 00:14:20.693 "current_admin_qpairs": 0, 00:14:20.693 "current_io_qpairs": 0, 00:14:20.693 "pending_bdev_io": 0, 00:14:20.693 "completed_nvme_io": 272, 00:14:20.693 "transports": [ 00:14:20.693 { 00:14:20.693 "trtype": "TCP" 00:14:20.693 } 00:14:20.693 ] 00:14:20.693 }, 00:14:20.693 { 00:14:20.693 "name": "nvmf_tgt_poll_group_002", 00:14:20.693 "admin_qpairs": 6, 00:14:20.693 "io_qpairs": 218, 00:14:20.693 "current_admin_qpairs": 0, 00:14:20.693 "current_io_qpairs": 0, 00:14:20.693 "pending_bdev_io": 0, 00:14:20.693 "completed_nvme_io": 513, 00:14:20.693 "transports": [ 00:14:20.693 { 00:14:20.693 "trtype": "TCP" 00:14:20.693 } 00:14:20.693 ] 00:14:20.693 }, 00:14:20.693 { 00:14:20.693 "name": "nvmf_tgt_poll_group_003", 00:14:20.693 "admin_qpairs": 0, 00:14:20.693 "io_qpairs": 224, 00:14:20.693 "current_admin_qpairs": 0, 00:14:20.693 "current_io_qpairs": 0, 00:14:20.693 "pending_bdev_io": 0, 00:14:20.693 "completed_nvme_io": 225, 00:14:20.693 "transports": [ 00:14:20.693 { 00:14:20.693 "trtype": "TCP" 00:14:20.693 } 00:14:20.693 ] 00:14:20.693 } 00:14:20.693 ] 00:14:20.693 }' 00:14:20.693 08:13:17 
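rpc.sh's jsum, applied next, reduces this nvmf_get_stats payload to a single number: jq pulls one field out of every poll group, awk sums the column, and the test only asserts the totals are positive (7 admin and 889 I/O qpairs across the four poll groups here). The pattern, roughly:

    # Sum one numeric field across all poll groups in the nvmf_get_stats output.
    jsum() {
        local filter=$1
        ./scripts/rpc.py nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
    }

    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))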
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:20.693 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:20.693 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:20.693 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:20.693 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:20.693 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:20.693 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:20.953 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:20.953 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:20.953 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:14:20.953 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:20.953 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:20.953 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:20.953 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:20.953 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:14:20.953 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:20.953 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:14:20.953 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:20.953 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:20.953 rmmod nvme_tcp 00:14:20.953 rmmod nvme_fabrics 00:14:20.953 rmmod nvme_keyring 00:14:20.953 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:20.953 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:14:20.953 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:14:20.953 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1884730 ']' 00:14:20.953 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1884730 00:14:20.953 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1884730 ']' 00:14:20.953 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1884730 00:14:20.953 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:14:20.953 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:20.953 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1884730 00:14:20.954 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:20.954 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:20.954 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1884730' 00:14:20.954 killing process with pid 1884730 00:14:20.954 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1884730 00:14:20.954 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1884730 00:14:21.215 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:21.215 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:21.215 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:21.215 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:14:21.215 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:14:21.215 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:21.215 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:14:21.215 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:21.215 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:21.215 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.215 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:21.215 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.131 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:23.131 00:14:23.131 real 0m38.205s 00:14:23.131 user 1m54.281s 00:14:23.131 sys 0m8.016s 00:14:23.131 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:23.131 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.131 ************************************ 00:14:23.131 END TEST nvmf_rpc 00:14:23.131 ************************************ 00:14:23.131 08:13:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:23.131 08:13:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:23.131 08:13:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:23.131 08:13:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:23.394 ************************************ 00:14:23.394 START TEST nvmf_invalid 00:14:23.394 ************************************ 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:23.394 * Looking for test storage... 
00:14:23.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:23.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.394 --rc genhtml_branch_coverage=1 00:14:23.394 --rc genhtml_function_coverage=1 00:14:23.394 --rc genhtml_legend=1 00:14:23.394 --rc geninfo_all_blocks=1 00:14:23.394 --rc geninfo_unexecuted_blocks=1 00:14:23.394 00:14:23.394 ' 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:23.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.394 --rc genhtml_branch_coverage=1 00:14:23.394 --rc genhtml_function_coverage=1 00:14:23.394 --rc genhtml_legend=1 00:14:23.394 --rc geninfo_all_blocks=1 00:14:23.394 --rc geninfo_unexecuted_blocks=1 00:14:23.394 00:14:23.394 ' 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:23.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.394 --rc genhtml_branch_coverage=1 00:14:23.394 --rc genhtml_function_coverage=1 00:14:23.394 --rc genhtml_legend=1 00:14:23.394 --rc geninfo_all_blocks=1 00:14:23.394 --rc geninfo_unexecuted_blocks=1 00:14:23.394 00:14:23.394 ' 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:23.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.394 --rc genhtml_branch_coverage=1 00:14:23.394 --rc genhtml_function_coverage=1 00:14:23.394 --rc genhtml_legend=1 00:14:23.394 --rc geninfo_all_blocks=1 00:14:23.394 --rc geninfo_unexecuted_blocks=1 00:14:23.394 00:14:23.394 ' 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:23.394 08:13:20 
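The lcov gate traced above is scripts/common.sh's version comparison: lt 1.15 2 splits each version string on '.', '-', or ':' (the IFS=.-: read -ra lines in the trace) and compares the arrays component by component, with the first difference deciding. A simplified sketch of just the less-than case:

    # Succeed when version $1 sorts strictly before version $2.
    version_lt() {
        local -a v1 v2
        local i n
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1    # equal versions are not less-than
    }

    version_lt 1.15 2 && echo "lcov predates 2.x"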
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.394 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
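[editor's note] One detail of the common.sh setup above worth spelling out: the host identity comes straight out of nvme-cli. "nvme gen-hostnqn" emits nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the host ID carried in the connect arguments is just that trailing UUID. A minimal sketch of the derivation — the exact parameter expansion and the connect invocation are illustrative assumptions, not quotes from common.sh:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the UUID after 'uuid:'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # illustrative use of the args, matching NVME_CONNECT='nvme connect' above:
    nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn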
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:23.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
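[editor's note] The "[: : integer expression expected" complaint recorded above is the one real wart in this setup: common.sh line 33 ends up running [ '' -eq 1 ] when the variable behind it is unset, and -eq refuses a non-integer operand. The run proceeds because the failing test simply takes the else branch, but a hedged sketch of the usual guard (variable name hypothetical):

    maybe_interrupt=""                           # hypothetical stand-in for the unset flag
    if [ "${maybe_interrupt:-0}" -eq 1 ]; then   # default empty/unset to 0 before -eq
      echo "flag enabled"
    else
      echo "flag unset or disabled"
    fi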
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.395 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.657 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:23.657 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:23.657 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:14:23.657 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:31.816 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:31.816 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:14:31.816 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:31.816 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:31.816 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:31.816 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:31.816 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:31.816 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:14:31.816 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:31.817 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:31.817 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
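[editor's note] The pci_bus_cache lookups above are NIC classification by vendor:device ID: 0x8086:0x1592/0x159b land in the e810 bucket (this rig's two ice-driven ports, found at 0000:4b:00.0/.1), 0x8086:0x37d2 in x722, and the 0x15b3 list in mlx. A sysfs-based sketch of the same bucketing, assuming no pci_bus_cache helper and collapsing the individual Mellanox IDs to a wildcard for brevity:

    declare -a e810 x722 mlx
    for dev in /sys/bus/pci/devices/*; do
      vendor=$(<"$dev/vendor") device=$(<"$dev/device")
      case "$vendor:$device" in
        0x8086:0x1592 | 0x8086:0x159b) e810+=("${dev##*/}") ;;   # Intel E810 (ice)
        0x8086:0x37d2)                 x722+=("${dev##*/}") ;;   # Intel X722
        0x15b3:*)                      mlx+=("${dev##*/}") ;;    # Mellanox
      esac
    done
    echo "e810: ${e810[*]:-none}  x722: ${x722[*]:-none}  mlx: ${mlx[*]:-none}"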
unbound ]] 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:31.817 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:31.817 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
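[editor's note] The "Found net devices under ..." lines come from the simplest possible PCI-to-netdev mapping: a bound network driver publishes its interface names under the device's sysfs node. Sketch of that lookup for one of the ports above (the nullglob guard is an added assumption — without it the raw glob would survive unexpanded if the device had no interface):

    shopt -s nullglob
    pci=0000:4b:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")     # basename only: cvl_0_0 on this rig
    echo "Found net devices under $pci: ${pci_net_devs[*]}"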
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:31.817 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:31.817 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:31.817 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:31.817 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:31.817 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:31.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:14:31.817 00:14:31.817 --- 10.0.0.2 ping statistics --- 00:14:31.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.817 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:14:31.817 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:31.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
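[editor's note] nvmf_tcp_init's end state deserves a summary: the target port cvl_0_0 is isolated in namespace cvl_0_0_ns_spdk as 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule opens TCP 4420 on the initiator side, and the two pings confirm the loop is closed in both directions. Condensed replay of the commands just traced (root privileges assumed):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns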
00:14:31.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:14:31.817 00:14:31.817 --- 10.0.0.1 ping statistics --- 00:14:31.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.817 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:14:31.817 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.817 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:14:31.817 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:31.817 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.817 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:31.817 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:31.817 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.817 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:31.817 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:31.817 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:31.818 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:31.818 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:31.818 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:31.818 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1894581 00:14:31.818 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1894581 00:14:31.818 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:31.818 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1894581 ']' 00:14:31.818 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.818 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:31.818 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.818 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:31.818 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:31.818 [2024-11-28 08:13:28.212850] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
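[editor's note] nvmfappstart launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until pid 1894581 answers on /var/tmp/spdk.sock. A hedged sketch of that wait loop — polling rpc_get_methods is my stand-in liveness probe; the real waitforlisten helper in autotest_common.sh is more elaborate:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for ((i = 0; i < 200; i++)); do
      # probe the JSON-RPC socket; success means the app is up and listening
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
    done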
00:14:31.818 [2024-11-28 08:13:28.212921] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.818 [2024-11-28 08:13:28.311752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:31.818 [2024-11-28 08:13:28.364698] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.818 [2024-11-28 08:13:28.364750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:31.818 [2024-11-28 08:13:28.364759] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.818 [2024-11-28 08:13:28.364766] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.818 [2024-11-28 08:13:28.364778] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.818 [2024-11-28 08:13:28.366829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.818 [2024-11-28 08:13:28.366990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.818 [2024-11-28 08:13:28.367151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.818 [2024-11-28 08:13:28.367152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:31.818 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:31.818 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:14:31.818 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:31.818 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:31.818 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:31.818 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.818 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:31.818 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode24332 00:14:32.081 [2024-11-28 08:13:29.252890] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:32.081 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:32.081 { 00:14:32.081 "nqn": "nqn.2016-06.io.spdk:cnode24332", 00:14:32.081 "tgt_name": "foobar", 00:14:32.081 "method": "nvmf_create_subsystem", 00:14:32.081 "req_id": 1 00:14:32.081 } 00:14:32.081 Got JSON-RPC error response 00:14:32.081 response: 00:14:32.081 { 00:14:32.081 "code": -32603, 00:14:32.081 "message": "Unable to find target foobar" 00:14:32.081 }' 00:14:32.081 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:32.081 { 00:14:32.081 "nqn": "nqn.2016-06.io.spdk:cnode24332", 00:14:32.081 "tgt_name": "foobar", 00:14:32.081 "method": "nvmf_create_subsystem", 00:14:32.081 "req_id": 1 00:14:32.081 } 00:14:32.081 Got JSON-RPC error response 00:14:32.081 
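[editor's note] First negative case: nvmf_create_subsystem is aimed at target name foobar, which no target object matches, so the RPC fails with -32603 "Unable to find target foobar". Sketch of the assertion pattern the script uses — capture stdout+stderr, require failure, then glob-match the error text:

    if out=$(./scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode24332 2>&1); then
      echo "expected nvmf_create_subsystem to fail" >&2
      exit 1
    fi
    [[ $out == *"Unable to find target"* ]] && echo "got the expected -32603 error"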
response: 00:14:32.081 { 00:14:32.081 "code": -32603, 00:14:32.081 "message": "Unable to find target foobar" 00:14:32.081 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:32.081 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:32.081 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21485 00:14:32.342 [2024-11-28 08:13:29.461755] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21485: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:32.342 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:32.342 { 00:14:32.342 "nqn": "nqn.2016-06.io.spdk:cnode21485", 00:14:32.342 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:32.342 "method": "nvmf_create_subsystem", 00:14:32.343 "req_id": 1 00:14:32.343 } 00:14:32.343 Got JSON-RPC error response 00:14:32.343 response: 00:14:32.343 { 00:14:32.343 "code": -32602, 00:14:32.343 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:32.343 }' 00:14:32.343 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:32.343 { 00:14:32.343 "nqn": "nqn.2016-06.io.spdk:cnode21485", 00:14:32.343 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:32.343 "method": "nvmf_create_subsystem", 00:14:32.343 "req_id": 1 00:14:32.343 } 00:14:32.343 Got JSON-RPC error response 00:14:32.343 response: 00:14:32.343 { 00:14:32.343 "code": -32602, 00:14:32.343 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:32.343 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:32.343 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:32.343 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode3649 00:14:32.605 [2024-11-28 08:13:29.670461] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3649: invalid model number 'SPDK_Controller' 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:32.605 { 00:14:32.605 "nqn": "nqn.2016-06.io.spdk:cnode3649", 00:14:32.605 "model_number": "SPDK_Controller\u001f", 00:14:32.605 "method": "nvmf_create_subsystem", 00:14:32.605 "req_id": 1 00:14:32.605 } 00:14:32.605 Got JSON-RPC error response 00:14:32.605 response: 00:14:32.605 { 00:14:32.605 "code": -32602, 00:14:32.605 "message": "Invalid MN SPDK_Controller\u001f" 00:14:32.605 }' 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:32.605 { 00:14:32.605 "nqn": "nqn.2016-06.io.spdk:cnode3649", 00:14:32.605 "model_number": "SPDK_Controller\u001f", 00:14:32.605 "method": "nvmf_create_subsystem", 00:14:32.605 "req_id": 1 00:14:32.605 } 00:14:32.605 Got JSON-RPC error response 00:14:32.605 response: 00:14:32.605 { 00:14:32.605 "code": -32602, 00:14:32.605 "message": "Invalid MN SPDK_Controller\u001f" 00:14:32.605 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:32.605 08:13:29 
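[editor's note] The next two cases poison otherwise-valid strings with a single non-printable byte: 0x1f appended to the serial number and then to the model number, each rejected with -32602 ("Invalid SN" / "Invalid MN"). Condensed sketch of both probes; $'...\037' carries the same control byte the echo -e '\x1f' steps above produce:

    rpc=./scripts/rpc.py
    out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21485 2>&1) || true
    [[ $out == *"Invalid SN"* ]] || exit 1
    out=$($rpc nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode3649 2>&1) || true
    [[ $out == *"Invalid MN"* ]] || exit 1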
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.605 08:13:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:14:32.605 
08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.605 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ' == \- ]] 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ''\''AnNr(}U;*kZXL3*.r~:#' 00:14:32.606 08:13:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ''\''AnNr(}U;*kZXL3*.r~:#' nqn.2016-06.io.spdk:cnode32692 00:14:32.867 [2024-11-28 08:13:30.055899] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32692: invalid serial number ''AnNr(}U;*kZXL3*.r~:#' 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:32.868 { 00:14:32.868 "nqn": "nqn.2016-06.io.spdk:cnode32692", 00:14:32.868 "serial_number": "'\''AnNr(}U;*kZXL3*.r~:#", 00:14:32.868 "method": "nvmf_create_subsystem", 00:14:32.868 "req_id": 1 00:14:32.868 } 00:14:32.868 Got JSON-RPC error response 00:14:32.868 response: 00:14:32.868 { 00:14:32.868 "code": -32602, 00:14:32.868 "message": "Invalid SN '\''AnNr(}U;*kZXL3*.r~:#" 00:14:32.868 }' 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 
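[editor's note] The long character-by-character walk above is gen_random_s assembling the 21-byte serial 'AnNr(}U;*kZXL3*.r~:# from codepoints 32..127 — and since RANDOM=0 was seeded earlier in invalid.sh, the "random" strings are reproducible run to run. A compact sketch of the same generator under a hypothetical name, skipping the per-character shell-escaping the original needs for its string+= bookkeeping:

    gen_random_s_sketch() {            # hypothetical; mirrors target/invalid.sh gen_random_s
      local length=$1 s='' ll code ch
      for ((ll = 0; ll < length; ll++)); do
        code=$((32 + RANDOM % 96))                     # codepoints 32..127, as in the chars array
        printf -v ch '%b' "\\x$(printf '%x' "$code")"  # same expansion as the echo -e '\xNN' steps
        s+=$ch
      done
      printf '%s\n' "$s"
    }
    RANDOM=0                           # seeded as in the test, so output is reproducible
    gen_random_s_sketch 21             # a 21-byte serial like the one tried above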
request: 00:14:32.868 { 00:14:32.868 "nqn": "nqn.2016-06.io.spdk:cnode32692", 00:14:32.868 "serial_number": "'AnNr(}U;*kZXL3*.r~:#", 00:14:32.868 "method": "nvmf_create_subsystem", 00:14:32.868 "req_id": 1 00:14:32.868 } 00:14:32.868 Got JSON-RPC error response 00:14:32.868 response: 00:14:32.868 { 00:14:32.868 "code": -32602, 00:14:32.868 "message": "Invalid SN 'AnNr(}U;*kZXL3*.r~:#" 00:14:32.868 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=o 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:32.868 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:14:33.130 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:33.130 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:14:33.130 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.130 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.130 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:14:33.130 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:33.130 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:14:33.130 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.130 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.130 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:14:33.130 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:33.130 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:14:33.130 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.130 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.130 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:14:33.130 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:14:33.130 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:14:33.130 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.131 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.131 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:14:33.131 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo 
-e '\x33' 00:14:33.131 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:14:33.131 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:33.131 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:33.131 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:33.131 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:33.131 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9
[... the same target/invalid.sh@24/@25 printf %x / echo -e / string+= cycle repeats for each remaining character of the random model number; the complete string appears in the echo below ...]
00:14:33.393 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ l == \- ]] 00:14:33.393 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'lnxoI_Zvi-39_[R;eP{!'\''=#-+DYO@aB^^y'\''S1/>DF' 00:14:33.393 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'lnxoI_Zvi-39_[R;eP{!'\''=#-+DYO@aB^^y'\''S1/>DF' nqn.2016-06.io.spdk:cnode27427 00:14:33.393 [2024-11-28 08:13:30.601941] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27427: invalid model number 'lnxoI_Zvi-39_[R;eP{!'=#-+DYO@aB^^y'S1/>DF' 00:14:33.393 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:33.393 { 00:14:33.393 "nqn": "nqn.2016-06.io.spdk:cnode27427", 00:14:33.393 "model_number": "lnxoI_Zvi-39_[R;eP{!'\''=#-+DYO@aB^^y'\''S1/>DF", 00:14:33.393 "method": "nvmf_create_subsystem", 00:14:33.393 "req_id": 1 00:14:33.393 } 00:14:33.393 Got JSON-RPC error response 00:14:33.393 response: 00:14:33.393 { 00:14:33.393 "code": -32602, 00:14:33.393 "message": "Invalid MN lnxoI_Zvi-39_[R;eP{!'\''=#-+DYO@aB^^y'\''S1/>DF" 00:14:33.393 }' 00:14:33.393 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:33.393 { 00:14:33.393 "nqn": "nqn.2016-06.io.spdk:cnode27427", 00:14:33.393 "model_number": "lnxoI_Zvi-39_[R;eP{!'=#-+DYO@aB^^y'S1/>DF", 00:14:33.393 "method": "nvmf_create_subsystem", 00:14:33.393 "req_id": 1 00:14:33.393 } 00:14:33.393 Got JSON-RPC error response 00:14:33.393 response: 00:14:33.393 { 00:14:33.393 "code": -32602, 00:14:33.393 "message": "Invalid MN
lnxoI_Zvi-39_[R;eP{!'=#-+DYO@aB^^y'S1/>DF" 00:14:33.394 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:33.394 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:33.654 [2024-11-28 08:13:30.806801] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:33.654 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:33.916 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:33.916 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:33.916 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:33.916 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:33.916 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:34.176 [2024-11-28 08:13:31.220319] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:34.176 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:34.176 { 00:14:34.177 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:34.177 "listen_address": { 00:14:34.177 "trtype": "tcp", 00:14:34.177 "traddr": "", 00:14:34.177 "trsvcid": "4421" 00:14:34.177 }, 00:14:34.177 "method": "nvmf_subsystem_remove_listener", 00:14:34.177 "req_id": 1 00:14:34.177 } 00:14:34.177 Got JSON-RPC error response 00:14:34.177 response: 00:14:34.177 { 00:14:34.177 "code": -32602, 00:14:34.177 "message": "Invalid parameters" 00:14:34.177 }' 00:14:34.177 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:34.177 { 00:14:34.177 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:34.177 "listen_address": { 00:14:34.177 "trtype": "tcp", 00:14:34.177 "traddr": "", 00:14:34.177 "trsvcid": "4421" 00:14:34.177 }, 00:14:34.177 "method": "nvmf_subsystem_remove_listener", 00:14:34.177 "req_id": 1 00:14:34.177 } 00:14:34.177 Got JSON-RPC error response 00:14:34.177 response: 00:14:34.177 { 00:14:34.177 "code": -32602, 00:14:34.177 "message": "Invalid parameters" 00:14:34.177 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:34.177 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23442 -i 0 00:14:34.177 [2024-11-28 08:13:31.404885] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23442: invalid cntlid range [0-65519] 00:14:34.177 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:34.177 { 00:14:34.177 "nqn": "nqn.2016-06.io.spdk:cnode23442", 00:14:34.177 "min_cntlid": 0, 00:14:34.177 "method": "nvmf_create_subsystem", 00:14:34.177 "req_id": 1 00:14:34.177 } 00:14:34.177 Got JSON-RPC error response 00:14:34.177 response: 00:14:34.177 { 00:14:34.177 "code": -32602, 00:14:34.177 "message": "Invalid cntlid range [0-65519]" 00:14:34.177 }' 00:14:34.177 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:34.177 { 00:14:34.177 
"nqn": "nqn.2016-06.io.spdk:cnode23442", 00:14:34.177 "min_cntlid": 0, 00:14:34.177 "method": "nvmf_create_subsystem", 00:14:34.177 "req_id": 1 00:14:34.177 } 00:14:34.177 Got JSON-RPC error response 00:14:34.177 response: 00:14:34.177 { 00:14:34.177 "code": -32602, 00:14:34.177 "message": "Invalid cntlid range [0-65519]" 00:14:34.177 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:34.177 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7199 -i 65520 00:14:34.437 [2024-11-28 08:13:31.593461] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7199: invalid cntlid range [65520-65519] 00:14:34.437 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:34.437 { 00:14:34.437 "nqn": "nqn.2016-06.io.spdk:cnode7199", 00:14:34.437 "min_cntlid": 65520, 00:14:34.437 "method": "nvmf_create_subsystem", 00:14:34.437 "req_id": 1 00:14:34.437 } 00:14:34.437 Got JSON-RPC error response 00:14:34.437 response: 00:14:34.437 { 00:14:34.437 "code": -32602, 00:14:34.437 "message": "Invalid cntlid range [65520-65519]" 00:14:34.437 }' 00:14:34.437 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:34.437 { 00:14:34.437 "nqn": "nqn.2016-06.io.spdk:cnode7199", 00:14:34.437 "min_cntlid": 65520, 00:14:34.437 "method": "nvmf_create_subsystem", 00:14:34.437 "req_id": 1 00:14:34.437 } 00:14:34.438 Got JSON-RPC error response 00:14:34.438 response: 00:14:34.438 { 00:14:34.438 "code": -32602, 00:14:34.438 "message": "Invalid cntlid range [65520-65519]" 00:14:34.438 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:34.438 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10353 -I 0 00:14:34.698 [2024-11-28 08:13:31.782043] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10353: invalid cntlid range [1-0] 00:14:34.698 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:34.698 { 00:14:34.698 "nqn": "nqn.2016-06.io.spdk:cnode10353", 00:14:34.698 "max_cntlid": 0, 00:14:34.698 "method": "nvmf_create_subsystem", 00:14:34.698 "req_id": 1 00:14:34.698 } 00:14:34.698 Got JSON-RPC error response 00:14:34.698 response: 00:14:34.698 { 00:14:34.698 "code": -32602, 00:14:34.698 "message": "Invalid cntlid range [1-0]" 00:14:34.698 }' 00:14:34.698 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:34.698 { 00:14:34.698 "nqn": "nqn.2016-06.io.spdk:cnode10353", 00:14:34.698 "max_cntlid": 0, 00:14:34.698 "method": "nvmf_create_subsystem", 00:14:34.698 "req_id": 1 00:14:34.698 } 00:14:34.698 Got JSON-RPC error response 00:14:34.698 response: 00:14:34.698 { 00:14:34.698 "code": -32602, 00:14:34.698 "message": "Invalid cntlid range [1-0]" 00:14:34.698 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:34.698 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17233 -I 65520 00:14:34.698 [2024-11-28 08:13:31.966625] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17233: invalid cntlid range [1-65520] 00:14:34.959 08:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:34.959 { 00:14:34.959 "nqn": "nqn.2016-06.io.spdk:cnode17233", 00:14:34.959 "max_cntlid": 65520, 00:14:34.959 "method": "nvmf_create_subsystem", 00:14:34.959 "req_id": 1 00:14:34.959 } 00:14:34.959 Got JSON-RPC error response 00:14:34.959 response: 00:14:34.959 { 00:14:34.959 "code": -32602, 00:14:34.959 "message": "Invalid cntlid range [1-65520]" 00:14:34.959 }' 00:14:34.959 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:34.959 { 00:14:34.959 "nqn": "nqn.2016-06.io.spdk:cnode17233", 00:14:34.959 "max_cntlid": 65520, 00:14:34.959 "method": "nvmf_create_subsystem", 00:14:34.959 "req_id": 1 00:14:34.959 } 00:14:34.959 Got JSON-RPC error response 00:14:34.959 response: 00:14:34.959 { 00:14:34.959 "code": -32602, 00:14:34.959 "message": "Invalid cntlid range [1-65520]" 00:14:34.959 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:34.959 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13747 -i 6 -I 5 00:14:34.959 [2024-11-28 08:13:32.147191] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13747: invalid cntlid range [6-5] 00:14:34.959 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:34.959 { 00:14:34.959 "nqn": "nqn.2016-06.io.spdk:cnode13747", 00:14:34.959 "min_cntlid": 6, 00:14:34.959 "max_cntlid": 5, 00:14:34.959 "method": "nvmf_create_subsystem", 00:14:34.959 "req_id": 1 00:14:34.959 } 00:14:34.959 Got JSON-RPC error response 00:14:34.959 response: 00:14:34.959 { 00:14:34.959 "code": -32602, 00:14:34.959 "message": "Invalid cntlid range [6-5]" 00:14:34.959 }' 00:14:34.959 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:34.959 { 00:14:34.959 "nqn": "nqn.2016-06.io.spdk:cnode13747", 00:14:34.959 "min_cntlid": 6, 00:14:34.959 "max_cntlid": 5, 00:14:34.959 "method": "nvmf_create_subsystem", 00:14:34.959 "req_id": 1 00:14:34.959 } 00:14:34.959 Got JSON-RPC error response 00:14:34.959 response: 00:14:34.959 { 00:14:34.959 "code": -32602, 00:14:34.959 "message": "Invalid cntlid range [6-5]" 00:14:34.959 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:34.959 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:35.260 { 00:14:35.260 "name": "foobar", 00:14:35.260 "method": "nvmf_delete_target", 00:14:35.260 "req_id": 1 00:14:35.260 } 00:14:35.260 Got JSON-RPC error response 00:14:35.260 response: 00:14:35.260 { 00:14:35.260 "code": -32602, 00:14:35.260 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:35.260 }' 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:35.260 { 00:14:35.260 "name": "foobar", 00:14:35.260 "method": "nvmf_delete_target", 00:14:35.260 "req_id": 1 00:14:35.260 } 00:14:35.260 Got JSON-RPC error response 00:14:35.260 response: 00:14:35.260 { 00:14:35.260 "code": -32602, 00:14:35.260 "message": "The specified target doesn't exist, cannot delete it." 
00:14:35.260 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:35.260 rmmod nvme_tcp 00:14:35.260 rmmod nvme_fabrics 00:14:35.260 rmmod nvme_keyring 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 1894581 ']' 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 1894581 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 1894581 ']' 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 1894581 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1894581 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1894581' 00:14:35.260 killing process with pid 1894581 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 1894581 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 1894581 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 
-- # iptables-restore 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:35.260 08:13:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:37.901 00:14:37.901 real 0m14.162s 00:14:37.901 user 0m21.243s 00:14:37.901 sys 0m6.707s 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:37.901 ************************************ 00:14:37.901 END TEST nvmf_invalid 00:14:37.901 ************************************ 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:37.901 ************************************ 00:14:37.901 START TEST nvmf_connect_stress 00:14:37.901 ************************************ 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:37.901 * Looking for test storage... 
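The nvmf_invalid suite that just ended follows one pattern throughout: issue an RPC that must fail, capture the JSON-RPC error, and glob-match the expected message (the backslash-heavy patterns such as *\I\n\v\a\l\i\d\ \M\N* are just xtrace's rendering of a quoted glob inside [[ ]]). A condensed sketch of that pattern for the model-number case, assuming the rpc.py path from this run; the random-string construction is simplified relative to invalid.sh's per-character loop:

  # Sketch of the invalid.sh assertion pattern: expect a specific JSON-RPC error.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  mn=$(head -c 41 /dev/urandom | base64)   # longer than the 40-byte NVMe MN field
  out=$($rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$RANDOM" -d "$mn" 2>&1) || true
  # xtrace prints this quoted pattern with every character escaped: *\I\n\v\a\l\i\d\ \M\N*
  [[ $out == *'Invalid MN'* ]] || { echo "model-number check failed: $out"; exit 1; }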
00:14:37.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:37.901 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:37.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.902 --rc genhtml_branch_coverage=1 00:14:37.902 --rc genhtml_function_coverage=1 00:14:37.902 --rc genhtml_legend=1 00:14:37.902 --rc geninfo_all_blocks=1 00:14:37.902 --rc geninfo_unexecuted_blocks=1 00:14:37.902 00:14:37.902 ' 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:37.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.902 --rc genhtml_branch_coverage=1 00:14:37.902 --rc genhtml_function_coverage=1 00:14:37.902 --rc genhtml_legend=1 00:14:37.902 --rc geninfo_all_blocks=1 00:14:37.902 --rc geninfo_unexecuted_blocks=1 00:14:37.902 00:14:37.902 ' 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:37.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.902 --rc genhtml_branch_coverage=1 00:14:37.902 --rc genhtml_function_coverage=1 00:14:37.902 --rc genhtml_legend=1 00:14:37.902 --rc geninfo_all_blocks=1 00:14:37.902 --rc geninfo_unexecuted_blocks=1 00:14:37.902 00:14:37.902 ' 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:37.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.902 --rc genhtml_branch_coverage=1 00:14:37.902 --rc genhtml_function_coverage=1 00:14:37.902 --rc genhtml_legend=1 00:14:37.902 --rc geninfo_all_blocks=1 00:14:37.902 --rc geninfo_unexecuted_blocks=1 00:14:37.902 00:14:37.902 ' 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:37.902 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:37.902 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:37.903 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:37.903 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:37.903 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:37.903 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.903 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:37.903 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:37.903 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:37.903 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.903 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:37.903 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.903 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:37.903 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:37.903 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:14:37.903 08:13:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:14:46.050 08:13:42 
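nvmftestinit, entered above, turns the two physical E810 ports into a self-contained test network: one port (cvl_0_0) moves into a namespace and becomes the target side at 10.0.0.2, while the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of the steps visible in the trace below, with interface and namespace names as in this run:

  # Sketch of the netns wiring nvmf_tcp_init performs below (run as root).
  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
  ping -c 1 10.0.0.2                                  # initiator -> target sanity check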
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:46.050 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:46.050 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:46.050 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:46.050 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:46.050 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:46.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:46.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:14:46.051 00:14:46.051 --- 10.0.0.2 ping statistics --- 00:14:46.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.051 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:46.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:46.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:14:46.051 00:14:46.051 --- 10.0.0.1 ping statistics --- 00:14:46.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.051 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1899772 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1899772 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1899772 ']' 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...'
00:14:46.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:46.051 08:13:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:46.051 [2024-11-28 08:13:42.518917] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization...
00:14:46.051 [2024-11-28 08:13:42.518983] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:46.051 [2024-11-28 08:13:42.617474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:14:46.051 [2024-11-28 08:13:42.668749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:46.051 [2024-11-28 08:13:42.668802] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:46.051 [2024-11-28 08:13:42.668810] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:46.051 [2024-11-28 08:13:42.668817] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:46.051 [2024-11-28 08:13:42.668824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:46.051 [2024-11-28 08:13:42.670657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:14:46.051 [2024-11-28 08:13:42.670821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:46.051 [2024-11-28 08:13:42.670822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:14:46.313 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:46.313 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0
00:14:46.313 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:14:46.313 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:14:46.313 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:46.313 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:46.313 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:14:46.313 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:46.313 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:46.313 [2024-11-28 08:13:43.390769] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:46.313 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:46.313 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:14:46.313 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
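
What the nvmf_tcp_init trace above amounts to: the harness moves the first e810 port (cvl_0_0) into a dedicated network namespace, keeps the second port (cvl_0_1) in the root namespace as the initiator side, and puts 10.0.0.2 and 10.0.0.1 on them so NVMe/TCP traffic has to cross the physical link. A condensed sketch of the same bring-up with this run's interface names; this is a readable summary of the commands traced above, not the verbatim nvmf/common.sh code:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"                          # target side gets its own namespace
ip link set cvl_0_0 netns "$NS"             # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port; the SPDK_NVMF comment is what teardown greps for later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                          # root namespace -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1      # and back the other way

Once both pings answer, nvmf_tgt is launched inside the namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -m 0xE entry above), so every connection the test makes travels over the wire; the rpc_cmd calls around this point then build the target: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, a 10.0.0.2:4420 listener, and a NULL1 bdev for it to serve.
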
00:14:46.313 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.313 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.314 [2024-11-28 08:13:43.416355] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.314 NULL1 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1899811 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:46.314 08:13:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:46.314 08:13:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.314 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.889 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.889 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:46.889 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.889 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.889 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.150 08:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.150 08:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:47.150 08:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.150 08:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.150 08:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.411 08:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.411 08:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:47.411 08:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.411 08:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.411 08:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.672 08:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.672 08:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:47.672 08:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.672 08:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.672 08:13:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.939 08:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.939 08:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:47.939 08:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.939 08:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.939 08:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.510 08:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.510 08:13:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:48.510 08:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.510 08:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.510 08:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:48.770 08:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.770 08:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:48.770 08:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:48.770 08:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.770 08:13:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.030 08:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.030 08:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:49.030 08:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.030 08:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.030 08:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.290 08:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.290 08:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:49.290 08:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.290 08:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.290 08:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:49.551 08:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.551 08:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:49.551 08:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:49.551 08:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.551 08:13:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.123 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.123 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:50.123 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.123 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.123 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.383 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.383 08:13:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:50.383 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.383 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.383 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.643 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.643 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:50.643 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.643 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.643 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.902 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.902 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:50.902 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:50.902 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.902 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.162 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.162 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:51.162 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.162 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.162 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.733 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.733 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:51.733 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.733 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.733 08:13:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:51.993 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.993 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:51.993 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:51.993 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.993 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.253 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.253 08:13:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:52.253 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.253 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.253 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.513 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.513 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:52.513 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.513 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.513 08:13:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:52.774 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.774 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:52.774 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:52.774 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.774 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.347 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.347 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:53.347 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.347 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.347 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.608 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.608 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:53.608 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.608 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.608 08:13:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:53.869 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.869 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:53.869 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:53.869 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.869 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:54.139 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.139 08:13:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:54.139 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:54.139 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.139 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:54.403 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.403 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:54.403 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:54.403 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.403 08:13:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:54.976 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.976 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:54.976 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:54.976 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.976 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:55.237 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.237 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:55.237 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.237 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.237 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:55.498 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.498 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:55.498 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.498 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.498 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:55.759 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.759 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:55.759 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:55.759 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.759 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:56.020 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.280 08:13:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:56.281 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:56.281 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.281 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:56.541 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1899811 00:14:56.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1899811) - No such process 00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1899811 00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:56.541 rmmod nvme_tcp 00:14:56.541 rmmod nvme_fabrics 00:14:56.541 rmmod nvme_keyring 00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1899772 ']' 00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1899772 00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1899772 ']' 00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1899772 00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1899772 00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
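
The long run of near-identical entries above is a single polling loop, not log noise: connect_stress.sh keeps replaying its batch of RPCs for as long as the stress tool (PID 1899811) stays alive, using kill -0 as the liveness probe, and the captured "line 34: kill: (1899811) - No such process" is simply the probe failing once the tool exits after its ten seconds (-t 10). A minimal sketch of the idiom; the redirection in the loop body is an assumption based on the rpcs=.../rpc.txt assignment earlier, not the verbatim script:

# kill -0 delivers no signal; its exit status only reports whether the PID still exists
while kill -0 "$PERF_PID"; do
    rpc_cmd < "$rpcs"    # assumed: replay the 20 generated RPC lines against the target
done
wait "$PERF_PID"         # reap the stressor's exit status (sh@38 above)
rm -f "$rpcs"            # drop the RPC batch file (sh@39 above)

After that the trap is cleared and nvmftestfini starts unwinding: the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above are the module cleanup, and the killprocess sequence that follows takes down the target (PID 1899772).
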
00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1899772'
00:14:56.541 killing process with pid 1899772
00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1899772
00:14:56.541 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1899772
00:14:56.801 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:14:56.801 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:14:56.801 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:14:56.801 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:14:56.801 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:14:56.801 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save
00:14:56.801 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore
00:14:56.801 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:14:56.801 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:14:56.801 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:56.801 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:56.801 08:13:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:58.715 08:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:14:58.715
00:14:58.715 real 0m21.278s
00:14:58.715 user 0m42.078s
00:14:58.715 sys 0m9.375s
00:14:58.715 08:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:58.715 08:13:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:58.715 ************************************
00:14:58.715 END TEST nvmf_connect_stress
00:14:58.715 ************************************
00:14:58.976 08:13:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:14:58.976 08:13:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:58.976 08:13:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:58.976 08:13:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:58.976 ************************************
00:14:58.976 START TEST nvmf_fused_ordering
00:14:58.976 ************************************
00:14:58.976 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:14:58.976 * Looking for test storage...
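
Teardown mirrors the setup, and the iptr trace above shows the trick that keeps it safe on a shared CI host: every firewall rule was inserted with a comment beginning SPDK_NVMF, so round-tripping the ruleset through iptables-save, filtering with grep -v, and loading it back removes exactly SPDK's rules and nothing else. A hedged sketch of the same cleanup with this run's names; the netns delete line is an assumption about what _remove_spdk_ns does internally:

iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip only the SPDK-tagged rules
ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1                               # clear the initiator address (sh@303 above)

With that, the 21-second connect_stress run is closed out and run_test moves straight into nvmf_fused_ordering, which begins by locating its test storage and rebuilds the same namespace topology before its own target starts.
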
00:14:58.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:58.976 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:58.976 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:14:58.976 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:58.976 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:58.976 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:58.976 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:58.976 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:58.976 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:58.976 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:58.976 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:58.976 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:58.976 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:58.976 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:58.976 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:58.976 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:58.976 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:58.976 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:58.976 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:58.977 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:58.977 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:58.977 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:58.977 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:58.977 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:58.977 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:58.977 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:58.977 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:58.977 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:58.977 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:58.977 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:58.977 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:58.977 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:58.977 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:58.977 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:58.977 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:58.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.977 --rc genhtml_branch_coverage=1 00:14:58.977 --rc genhtml_function_coverage=1 00:14:58.977 --rc genhtml_legend=1 00:14:58.977 --rc geninfo_all_blocks=1 00:14:58.977 --rc geninfo_unexecuted_blocks=1 00:14:58.977 00:14:58.977 ' 00:14:58.977 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:58.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.977 --rc genhtml_branch_coverage=1 00:14:58.977 --rc genhtml_function_coverage=1 00:14:58.977 --rc genhtml_legend=1 00:14:58.977 --rc geninfo_all_blocks=1 00:14:58.977 --rc geninfo_unexecuted_blocks=1 00:14:58.977 00:14:58.977 ' 00:14:58.977 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:58.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.977 --rc genhtml_branch_coverage=1 00:14:58.977 --rc genhtml_function_coverage=1 00:14:58.977 --rc genhtml_legend=1 00:14:58.977 --rc geninfo_all_blocks=1 00:14:58.977 --rc geninfo_unexecuted_blocks=1 00:14:58.977 00:14:58.977 ' 00:14:58.977 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:58.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.977 --rc genhtml_branch_coverage=1 00:14:58.977 --rc genhtml_function_coverage=1 00:14:58.977 --rc genhtml_legend=1 00:14:58.977 --rc geninfo_all_blocks=1 00:14:58.977 --rc geninfo_unexecuted_blocks=1 00:14:58.977 00:14:58.977 ' 00:14:58.977 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:58.977 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:58.977 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.977 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:59.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:59.239 08:13:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:15:07.378 08:14:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:07.378 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:07.378 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:07.378 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:07.379 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:07.379 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:07.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:07.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:15:07.379 00:15:07.379 --- 10.0.0.2 ping statistics --- 00:15:07.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.379 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:07.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:07.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:15:07.379 00:15:07.379 --- 10.0.0.1 ping statistics --- 00:15:07.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.379 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1906163 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1906163 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1906163 ']' 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:15:07.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:07.379 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:07.379 [2024-11-28 08:14:03.909099] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:15:07.379 [2024-11-28 08:14:03.909169] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:07.379 [2024-11-28 08:14:03.994365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.379 [2024-11-28 08:14:04.046342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:07.379 [2024-11-28 08:14:04.046405] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:07.379 [2024-11-28 08:14:04.046414] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:07.379 [2024-11-28 08:14:04.046422] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:07.379 [2024-11-28 08:14:04.046428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:07.379 [2024-11-28 08:14:04.047199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:07.640 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:07.640 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:15:07.640 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:07.640 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:07.640 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:07.640 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.640 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:07.640 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.640 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:07.640 [2024-11-28 08:14:04.784213] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:07.640 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.640 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:07.640 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.640 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:07.640 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:07.640 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:07.640 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.640 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:07.640 [2024-11-28 08:14:04.808550] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:07.640 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.640 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:07.640 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.640 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:07.640 NULL1 00:15:07.640 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.641 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:07.641 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.641 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:07.641 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.641 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:07.641 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.641 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:07.641 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.641 08:14:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:07.641 [2024-11-28 08:14:04.880423] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
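For reference, the bring-up and provisioning that the trace above walks through, spread across nvmf/common.sh helpers and target/fused_ordering.sh, condenses to the shell sketch below. Every command line is lifted from the trace itself; treat the grouping into one script, the relative paths, and the backgrounding of nvmf_tgt as editorial assumptions rather than the harness's literal code:

    # Assumed: run as root from the SPDK checkout; interface names and IPs are
    # the ones this job discovered (cvl_0_0 = target side, cvl_0_1 = initiator).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator
    modprobe nvme-tcp
    # Start the target inside the namespace, then provision it over /var/tmp/spdk.sock.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512     # 1000 MB null bdev, 512 B blocks -> the "1GB" namespace below
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    ./test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The fused_ordering(N) lines that follow are the per-iteration progress output of that final binary.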
00:15:07.641 [2024-11-28 08:14:04.880464] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1906278 ] 00:15:08.211 Attached to nqn.2016-06.io.spdk:cnode1 00:15:08.211 Namespace ID: 1 size: 1GB 
00:15:08.211 fused_ordering(0) [fused_ordering(1) through fused_ordering(1022) elided: 1,022 identical per-iteration lines, timestamps 00:15:08.211 to 00:15:10.193] 00:15:10.193 fused_ordering(1023) 
00:15:10.193 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:10.193 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:10.193 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:10.193 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:15:10.193 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:10.193 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:15:10.193 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:10.193 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:10.193 rmmod nvme_tcp 00:15:10.193 rmmod nvme_fabrics 00:15:10.193 rmmod nvme_keyring 00:15:10.193 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:10.193 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:15:10.193 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:15:10.193 08:14:07
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1906163 ']' 00:15:10.193 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1906163 00:15:10.193 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1906163 ']' 00:15:10.193 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1906163 00:15:10.193 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:15:10.193 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:10.193 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1906163 00:15:10.193 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:10.193 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:10.193 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1906163' 00:15:10.193 killing process with pid 1906163 00:15:10.193 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1906163 00:15:10.193 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1906163 00:15:10.454 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:10.454 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:10.454 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:10.454 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:15:10.454 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:15:10.454 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:10.454 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:15:10.454 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:10.454 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:10.454 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.454 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:10.454 08:14:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.371 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:12.371 00:15:12.371 real 0m13.605s 00:15:12.371 user 0m7.182s 00:15:12.371 sys 0m7.303s 00:15:12.371 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.371 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:12.371 ************************************ 00:15:12.371 END TEST nvmf_fused_ordering 00:15:12.371 
************************************ 00:15:12.633 08:14:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:12.633 08:14:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:12.633 08:14:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.633 08:14:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:12.633 ************************************ 00:15:12.633 START TEST nvmf_ns_masking 00:15:12.633 ************************************ 00:15:12.633 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:12.633 * Looking for test storage... 00:15:12.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:12.633 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:12.633 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:15:12.633 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:12.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.895 --rc genhtml_branch_coverage=1 00:15:12.895 --rc genhtml_function_coverage=1 00:15:12.895 --rc genhtml_legend=1 00:15:12.895 --rc geninfo_all_blocks=1 00:15:12.895 --rc geninfo_unexecuted_blocks=1 00:15:12.895 00:15:12.895 ' 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:12.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.895 --rc genhtml_branch_coverage=1 00:15:12.895 --rc genhtml_function_coverage=1 00:15:12.895 --rc genhtml_legend=1 00:15:12.895 --rc geninfo_all_blocks=1 00:15:12.895 --rc geninfo_unexecuted_blocks=1 00:15:12.895 00:15:12.895 ' 00:15:12.895 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:12.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.895 --rc genhtml_branch_coverage=1 00:15:12.895 --rc genhtml_function_coverage=1 00:15:12.895 --rc genhtml_legend=1 00:15:12.895 --rc geninfo_all_blocks=1 00:15:12.895 --rc geninfo_unexecuted_blocks=1 00:15:12.896 00:15:12.896 ' 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:12.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.896 --rc genhtml_branch_coverage=1 00:15:12.896 --rc genhtml_function_coverage=1 00:15:12.896 --rc genhtml_legend=1 00:15:12.896 --rc geninfo_all_blocks=1 00:15:12.896 --rc geninfo_unexecuted_blocks=1 00:15:12.896 00:15:12.896 ' 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
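The lt/cmp_versions walk just traced is scripts/common.sh deciding that the installed lcov (1.15) predates version 2 before it picks coverage flags. A minimal re-implementation of the component-wise semantics visible in the trace (not the verbatim helper; padding missing components with 0 is an assumption the traced call never exercises):

    # Split versions on . - : and compare component by component.
    cmp_versions() {
        local op=$2 v c1 c2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            c1=${ver1[v]:-0} c2=${ver2[v]:-0}
            (( c1 > c2 )) && { [[ $op == '>' || $op == '>=' ]]; return; }
            (( c1 < c2 )) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == *=* ]]   # every component equal: only ==, <=, >= succeed
    }
    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo "old lcov"   # the branch this job takes: 1 < 2 decides it

The first unequal component settles the comparison, which is why the traced run returns after looking only at 1 versus 2.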
nvmf/common.sh@7 -- # uname -s 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:12.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
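Two quirks of the environment surface above. First, paths/export.sh re-prepends the same Go/protoc/golangci directories every time it is sourced, which is why PATH balloons with duplicates. Second, the "[: : integer expression expected" message is nvmf/common.sh line 33 evaluating [ '' -eq 1 ]: -eq requires integers on both sides, so an empty variable makes [ complain, though the test simply returns false and the run continues. A guarded sketch of that second pattern (the flag name here is hypothetical, standing in for the real variable):

  # sketch: empty variable vs. -eq, and a defensive rewrite
  SOME_FLAG=""                              # hypothetical, unset in this run
  # [ "$SOME_FLAG" -eq 1 ]                  # prints "[: : integer expression expected"
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then      # defaulting the expansion keeps [ happy
      echo enabled
  fi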
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=86d4e8dc-6034-4aa7-ad9c-d85455c9e450 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=3ac76199-5c22-45c7-aefa-89dbea609e20 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=62d10901-2433-4091-872c-424333757101 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:12.896 08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.896 08:14:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:12.896 08:14:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:12.896 08:14:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:15:12.896 08:14:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:21.043 08:14:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:21.043 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:21.043 08:14:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:21.043 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:21.043 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:21.044 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
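gather_supported_nvmf_pci_devs above keys arrays of supported NICs by vendor:device ID (Intel E810 0x1592/0x159b, X722 0x37d2, a run of Mellanox ConnectX IDs) and walks the candidates until it finds live ports; both E810 ports at 0000:4b:00.0/1 match and their cvl_0_0/cvl_0_1 netdevs are collected. A sketch of that resolution step, assuming pci_bus_cache maps "vendor:device" strings to PCI addresses (it is populated elsewhere from lspci output):

  # sketch: resolving supported NICs to netdev names via sysfs, as above
  intel=0x8086
  declare -A pci_bus_cache                   # assumed filled from lspci -Dmmn
  e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
  net_devs=()
  for pci in "${e810[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdevs live in sysfs
      pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path prefix
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done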
00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:21.044 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:21.044 08:14:17 
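nvmf_tcp_init then splits the two ports across a network namespace, so the target listens on 10.0.0.2 inside cvl_0_0_ns_spdk while the initiator keeps 10.0.0.1 in the root namespace and traffic crosses the real link; the tagged iptables rule and the two pings just below verify the path in both directions. The plumbing, condensed from the commands in the trace (root required):

  # sketch: the namespace split performed above
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port enters the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays put
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up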
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:21.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:21.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:15:21.044 00:15:21.044 --- 10.0.0.2 ping statistics --- 00:15:21.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.044 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:21.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:21.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:15:21.044 00:15:21.044 --- 10.0.0.1 ping statistics --- 00:15:21.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.044 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1911032 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1911032 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1911032 ']' 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:21.044 08:14:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:21.044 [2024-11-28 08:14:17.596545] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:15:21.044 [2024-11-28 08:14:17.596616] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.044 [2024-11-28 08:14:17.696571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.044 [2024-11-28 08:14:17.747718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:21.044 [2024-11-28 08:14:17.747770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:21.044 [2024-11-28 08:14:17.747779] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:21.044 [2024-11-28 08:14:17.747786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:21.044 [2024-11-28 08:14:17.747793] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
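nvmfappstart launches nvmf_tgt wrapped in the namespace (NVMF_APP was prefixed with the ip netns exec command earlier), then waitforlisten blocks until the RPC socket answers. The shape of that bring-up, as a sketch; the polling loop is a simplification of what waitforlisten actually does:

  # sketch: start the target inside the netns and wait for its RPC socket
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2                              # retry until the app is listening
  done
  echo "nvmf_tgt up as pid $nvmfpid"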
00:15:21.044 [2024-11-28 08:14:17.748594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.306 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:21.306 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:21.306 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:21.306 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:21.306 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:21.306 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.306 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:21.567 [2024-11-28 08:14:18.623823] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:21.567 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:21.567 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:21.567 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:21.567 Malloc1 00:15:21.829 08:14:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:21.829 Malloc2 00:15:21.829 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:22.090 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:22.351 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:22.351 [2024-11-28 08:14:19.638081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:22.611 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:22.611 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 62d10901-2433-4091-872c-424333757101 -a 10.0.0.2 -s 4420 -i 4 00:15:22.611 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:22.611 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:22.611 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:22.611 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:22.611 
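Once the target is up, the test wires the data path over JSON-RPC: a TCP transport, two 64 MiB/512 B malloc bdevs, subsystem cnode1 with Malloc1 attached as namespace 1, and a listener on 10.0.0.2:4420; the initiator then connects with an explicit host NQN and host ID, which is the identity the masking checks key on. Condensed from the calls in the trace (rpc.py path shortened):

  # sketch: target wiring and initiator connect, as driven above
  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc bdev_malloc_create 64 512 -b Malloc2
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I 62d10901-2433-4091-872c-424333757101 -a 10.0.0.2 -s 4420 -i 4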
08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:25.164 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:25.164 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:25.164 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:25.164 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:25.164 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:25.164 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:25.164 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:25.164 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:25.164 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:25.164 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:25.164 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:25.164 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:25.164 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:25.164 [ 0]:0x1 00:15:25.164 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:25.164 08:14:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:25.164 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=987281f4dafb44a1bcacccadfec384bf 00:15:25.164 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 987281f4dafb44a1bcacccadfec384bf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:25.164 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:25.164 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:25.164 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:25.164 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:25.164 [ 0]:0x1 00:15:25.164 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:25.164 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:25.164 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=987281f4dafb44a1bcacccadfec384bf 00:15:25.164 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 987281f4dafb44a1bcacccadfec384bf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:25.164 08:14:22 
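The "[ 0]:0x1" / nguid fragments above are ns_is_visible at work: a namespace counts as visible when it appears in nvme list-ns and nvme id-ns reports a non-zero NGUID for it, while a masked namespace answers with the all-zero NGUID instead. A sketch of the check as the trace implies it (the real function in ns_masking.sh may differ in detail):

  # sketch: visibility probe matching the checks replayed above
  ns_is_visible() {
      local nsid=$1
      nvme list-ns /dev/nvme0 | grep "$nsid" || return 1
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
      # all zeros means the controller is hiding the namespace from this host
      [[ $nguid != "00000000000000000000000000000000" ]]
  }
  ns_is_visible 0x1 && echo "nsid 1 visible"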
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:25.164 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:25.164 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:25.164 [ 1]:0x2 00:15:25.164 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:25.164 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:25.164 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=323d2a8ad2774e43aa679e37e01eb59a 00:15:25.164 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 323d2a8ad2774e43aa679e37e01eb59a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:25.164 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:25.164 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:25.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.164 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:25.425 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:25.687 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:25.687 08:14:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 62d10901-2433-4091-872c-424333757101 -a 10.0.0.2 -s 4420 -i 4 00:15:25.948 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:25.948 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:25.948 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:25.948 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:15:25.948 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:15:25.948 08:14:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:27.859 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:28.120 [ 0]:0x2 00:15:28.120 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:28.120 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:28.120 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=323d2a8ad2774e43aa679e37e01eb59a 00:15:28.120 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 323d2a8ad2774e43aa679e37e01eb59a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:28.120 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:28.120 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:28.120 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:28.120 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:28.120 [ 0]:0x1 00:15:28.120 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:28.120 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:28.380 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=987281f4dafb44a1bcacccadfec384bf 00:15:28.380 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 987281f4dafb44a1bcacccadfec384bf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:28.380 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:28.380 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:28.380 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:28.380 [ 1]:0x2 00:15:28.380 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:28.380 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:28.380 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=323d2a8ad2774e43aa679e37e01eb59a 00:15:28.380 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 323d2a8ad2774e43aa679e37e01eb59a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:28.380 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:28.380 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:28.380 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:28.380 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:28.380 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:28.639 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:28.639 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:28.639 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:28.639 08:14:25 
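This stretch is the heart of the test: namespace 1 was re-added with --no-auto-visible, so it stays hidden until nvmf_ns_add_host grants nqn.2016-06.io.spdk:host1 access, and vanishes again after nvmf_ns_remove_host, while namespace 2 (added auto-visible) is seen throughout. The toggle, condensed from the RPCs above:

  # sketch: the visibility toggle exercised above
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  ns_is_visible 0x1 || echo "hidden until granted"
  $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  ns_is_visible 0x1 && echo "visible to host1"
  $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  ns_is_visible 0x1 || echo "hidden again"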
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:28.639 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:28.639 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:28.639 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:28.639 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:28.639 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:28.639 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:28.639 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:28.639 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:28.640 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:28.640 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:28.640 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:28.640 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:28.640 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:28.640 [ 0]:0x2 00:15:28.640 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:28.640 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:28.640 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=323d2a8ad2774e43aa679e37e01eb59a 00:15:28.640 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 323d2a8ad2774e43aa679e37e01eb59a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:28.640 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:28.640 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:28.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.640 08:14:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:28.901 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:28.901 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 62d10901-2433-4091-872c-424333757101 -a 10.0.0.2 -s 4420 -i 4 00:15:28.901 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:28.901 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:28.901 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:28.901 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:28.901 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:28.901 08:14:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:31.445 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:31.445 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:31.445 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:31.445 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:31.445 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:31.445 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:31.445 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:31.445 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:31.445 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:31.445 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:31.445 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:31.445 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:31.445 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:31.445 [ 0]:0x1 00:15:31.445 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:31.445 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:31.445 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=987281f4dafb44a1bcacccadfec384bf 00:15:31.446 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 987281f4dafb44a1bcacccadfec384bf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:31.446 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:31.446 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:31.446 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:31.446 [ 1]:0x2 00:15:31.446 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:31.446 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:31.446 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=323d2a8ad2774e43aa679e37e01eb59a 00:15:31.446 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 323d2a8ad2774e43aa679e37e01eb59a != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:31.446 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:31.446 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:31.446 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:31.446 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:31.446 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:31.446 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:31.446 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:31.446 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:31.446 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:31.446 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:31.446 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:31.446 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:31.446 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:31.706 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:31.706 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:31.706 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:31.706 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:31.706 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:31.706 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:31.706 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:31.706 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:31.706 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:31.706 [ 0]:0x2 00:15:31.706 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:31.706 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:31.706 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=323d2a8ad2774e43aa679e37e01eb59a 00:15:31.706 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 323d2a8ad2774e43aa679e37e01eb59a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:31.706 08:14:28 
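The long valid_exec_arg/es sequences above are SPDK's NOT wrapper from autotest_common.sh: it runs its argument, captures the exit status, and itself succeeds only when the wrapped command failed, which is how the script asserts both "this namespace must not be visible" and, next, "this RPC must be rejected". The core idea, as a sketch:

  # sketch: inverted assertion in the spirit of autotest_common.sh's NOT
  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))       # success here means the command failed, as required
  }
  NOT false && echo "false failed, so NOT false passes"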
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:31.706 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:31.707 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:31.707 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:31.707 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:31.707 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:31.707 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:31.707 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:31.707 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:31.707 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:31.707 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:31.707 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:31.707 [2024-11-28 08:14:28.991975] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:31.968 request: 00:15:31.968 { 00:15:31.968 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:31.968 "nsid": 2, 00:15:31.968 "host": "nqn.2016-06.io.spdk:host1", 00:15:31.968 "method": "nvmf_ns_remove_host", 00:15:31.968 "req_id": 1 00:15:31.968 } 00:15:31.968 Got JSON-RPC error response 00:15:31.968 response: 00:15:31.968 { 00:15:31.968 "code": -32602, 00:15:31.968 "message": "Invalid parameters" 00:15:31.968 } 00:15:31.968 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:31.968 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:31.968 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:31.968 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:31.968 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:31.968 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:31.968 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:31.968 08:14:29 
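The -32602 response above is the negative case: namespace 2 was added auto-visible, so it carries no per-host visibility list and nvmf_rpc_ns_visible_paused rejects the attempt to remove a host from it; wrapped in NOT, that rejection is exactly what the test wants. Reproduced in isolation:

  # sketch: the expected-failure RPC behind the Invalid parameters error above
  $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 \
      || echo "rejected (-32602), as the NOT wrapper expects"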
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:31.969 [ 0]:0x2 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=323d2a8ad2774e43aa679e37e01eb59a 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 323d2a8ad2774e43aa679e37e01eb59a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:31.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1913376 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1913376 /var/tmp/host.sock 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1913376 ']' 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:31.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.969 08:14:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:32.231 [2024-11-28 08:14:29.263289] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:15:32.231 [2024-11-28 08:14:29.263340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1913376 ] 00:15:32.231 [2024-11-28 08:14:29.353769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.231 [2024-11-28 08:14:29.389573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.826 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:32.826 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:32.826 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:33.204 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:33.204 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 86d4e8dc-6034-4aa7-ad9c-d85455c9e450 00:15:33.204 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:33.204 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 86D4E8DC60344AA7AD9CD85455C9E450 -i 00:15:33.511 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 3ac76199-5c22-45c7-aefa-89dbea609e20 00:15:33.511 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:33.511 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 3AC761995C2245C7AEFA89DBEA609E20 -i 00:15:33.511 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:33.771 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:34.032 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:34.032 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:34.294 nvme0n1 00:15:34.294 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:34.294 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:34.555 nvme1n2 00:15:34.555 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:34.555 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:34.555 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:34.555 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:34.555 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:34.815 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:34.816 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:34.816 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:34.816 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:35.076 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 86d4e8dc-6034-4aa7-ad9c-d85455c9e450 == \8\6\d\4\e\8\d\c\-\6\0\3\4\-\4\a\a\7\-\a\d\9\c\-\d\8\5\4\5\5\c\9\e\4\5\0 ]] 00:15:35.076 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:35.076 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:35.076 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:35.076 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
3ac76199-5c22-45c7-aefa-89dbea609e20 == \3\a\c\7\6\1\9\9\-\5\c\2\2\-\4\5\c\7\-\a\e\f\a\-\8\9\d\b\e\a\6\0\9\e\2\0 ]] 00:15:35.076 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:35.337 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:35.598 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 86d4e8dc-6034-4aa7-ad9c-d85455c9e450 00:15:35.598 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:35.598 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 86D4E8DC60344AA7AD9CD85455C9E450 00:15:35.598 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:35.598 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 86D4E8DC60344AA7AD9CD85455C9E450 00:15:35.598 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:35.598 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:35.598 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:35.598 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:35.598 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:35.598 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:35.598 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:35.598 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:35.598 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 86D4E8DC60344AA7AD9CD85455C9E450 00:15:35.859 [2024-11-28 08:14:32.886236] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:35.859 [2024-11-28 08:14:32.886265] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:35.859 [2024-11-28 08:14:32.886272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.859 request: 00:15:35.859 { 00:15:35.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.859 "namespace": { 00:15:35.859 "bdev_name": 
"invalid", 00:15:35.859 "nsid": 1, 00:15:35.859 "nguid": "86D4E8DC60344AA7AD9CD85455C9E450", 00:15:35.859 "no_auto_visible": false, 00:15:35.859 "hide_metadata": false 00:15:35.859 }, 00:15:35.859 "method": "nvmf_subsystem_add_ns", 00:15:35.859 "req_id": 1 00:15:35.859 } 00:15:35.859 Got JSON-RPC error response 00:15:35.859 response: 00:15:35.859 { 00:15:35.859 "code": -32602, 00:15:35.859 "message": "Invalid parameters" 00:15:35.859 } 00:15:35.859 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:35.859 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:35.859 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:35.859 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:35.859 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 86d4e8dc-6034-4aa7-ad9c-d85455c9e450 00:15:35.859 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:35.859 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 86D4E8DC60344AA7AD9CD85455C9E450 -i 00:15:35.859 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:38.406 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:38.406 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:38.406 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:38.406 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:38.406 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1913376 00:15:38.406 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1913376 ']' 00:15:38.406 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1913376 00:15:38.406 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:38.406 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.406 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1913376 00:15:38.406 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:38.406 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:38.406 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1913376' 00:15:38.406 killing process with pid 1913376 00:15:38.406 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1913376 00:15:38.406 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1913376 00:15:38.406 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:38.406 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:38.406 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:38.407 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:38.407 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:38.407 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:38.407 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:38.407 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:38.407 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:38.407 rmmod nvme_tcp 00:15:38.667 rmmod nvme_fabrics 00:15:38.667 rmmod nvme_keyring 00:15:38.667 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:38.667 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:38.667 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:38.667 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1911032 ']' 00:15:38.667 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1911032 00:15:38.667 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1911032 ']' 00:15:38.667 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1911032 00:15:38.667 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:38.668 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.668 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1911032 00:15:38.668 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:38.668 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:38.668 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1911032' 00:15:38.668 killing process with pid 1911032 00:15:38.668 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1911032 00:15:38.668 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1911032 00:15:38.668 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:38.668 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:38.668 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:38.668 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:38.668 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:38.668 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:15:38.668 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:38.668 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:38.668 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:38.668 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.668 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:38.668 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:41.213 00:15:41.213 real 0m28.281s 00:15:41.213 user 0m32.137s 00:15:41.213 sys 0m8.321s 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:41.213 ************************************ 00:15:41.213 END TEST nvmf_ns_masking 00:15:41.213 ************************************ 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:41.213 ************************************ 00:15:41.213 START TEST nvmf_nvme_cli 00:15:41.213 ************************************ 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:41.213 * Looking for test storage... 
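The nvmf_ns_masking run that ends above exercised SPDK's per-host namespace visibility: nvmf_ns_add_host/nvmf_ns_remove_host toggle whether a namespace is exposed to a given host NQN, and after each change the harness re-checks visibility from the initiator side. A minimal sketch of the ns_is_visible helper as it can be reconstructed from the xtrace output above (the device path /dev/nvme0 and the all-zero-NGUID convention are taken from the log; this is a paraphrase, not the verbatim ns_masking.sh source):

    # True when hex NSID $1 is exposed to this host on /dev/nvme0.
    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1" || return 1
        # A namespace masked from this host identifies with an all-zero NGUID.
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

The NOT wrapper seen in the trace simply inverts this check, letting the test assert both that a namespace disappears after nvmf_ns_remove_host and that an invalid removal request returns the JSON-RPC "Invalid parameters" error captured above.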
00:15:41.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:41.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.213 --rc genhtml_branch_coverage=1 00:15:41.213 --rc genhtml_function_coverage=1 00:15:41.213 --rc genhtml_legend=1 00:15:41.213 --rc geninfo_all_blocks=1 00:15:41.213 --rc geninfo_unexecuted_blocks=1 00:15:41.213 00:15:41.213 ' 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:41.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.213 --rc genhtml_branch_coverage=1 00:15:41.213 --rc genhtml_function_coverage=1 00:15:41.213 --rc genhtml_legend=1 00:15:41.213 --rc geninfo_all_blocks=1 00:15:41.213 --rc geninfo_unexecuted_blocks=1 00:15:41.213 00:15:41.213 ' 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:41.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.213 --rc genhtml_branch_coverage=1 00:15:41.213 --rc genhtml_function_coverage=1 00:15:41.213 --rc genhtml_legend=1 00:15:41.213 --rc geninfo_all_blocks=1 00:15:41.213 --rc geninfo_unexecuted_blocks=1 00:15:41.213 00:15:41.213 ' 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:41.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.213 --rc genhtml_branch_coverage=1 00:15:41.213 --rc genhtml_function_coverage=1 00:15:41.213 --rc genhtml_legend=1 00:15:41.213 --rc geninfo_all_blocks=1 00:15:41.213 --rc geninfo_unexecuted_blocks=1 00:15:41.213 00:15:41.213 ' 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
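The trace above (scripts/common.sh@333-368) is the lcov version gate: cmp_versions splits each version string on '.', '-' and ':' and compares the fields left to right, here concluding that lcov 1.15 sorts before 2, so the pre-2.0 '--rc lcov_branch_coverage=1' option spelling is selected. Condensed into a sketch under the assumption that only numeric fields matter (the real helper also dispatches on a '<'/'>' operator argument through the case "$op" visible in the trace):

    version_lt() {
        local IFS='.-:' v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        # Compare field by field; a missing field counts as 0.
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not strictly less-than
    }
    version_lt 1.15 2 && echo "pre-2.0 lcov option spelling"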
00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.213 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:41.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:41.214 08:14:38 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:41.214 08:14:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:49.360 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:49.360 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.360 
08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:49.360 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:49.360 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:49.360 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:49.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:49.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:15:49.361 00:15:49.361 --- 10.0.0.2 ping statistics --- 00:15:49.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.361 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:49.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:49.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:15:49.361 00:15:49.361 --- 10.0.0.1 ping statistics --- 00:15:49.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.361 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1918959 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1918959 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1918959 ']' 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.361 08:14:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:49.361 [2024-11-28 08:14:45.914087] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:15:49.361 [2024-11-28 08:14:45.914153] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.361 [2024-11-28 08:14:46.014622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:49.361 [2024-11-28 08:14:46.069950] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.361 [2024-11-28 08:14:46.070007] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.361 [2024-11-28 08:14:46.070015] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.361 [2024-11-28 08:14:46.070022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.361 [2024-11-28 08:14:46.070029] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.361 [2024-11-28 08:14:46.072117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.361 [2024-11-28 08:14:46.072276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:49.361 [2024-11-28 08:14:46.072330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.361 [2024-11-28 08:14:46.072330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:49.624 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.624 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:15:49.624 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:49.625 [2024-11-28 08:14:46.798188] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:49.625 Malloc0 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:49.625 Malloc1 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.625 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:49.887 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.887 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:49.887 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.887 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:49.887 [2024-11-28 08:14:46.919887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:49.887 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.887 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:49.887 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.887 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:49.887 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.887 08:14:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:15:49.887 00:15:49.887 Discovery Log Number of Records 2, Generation counter 2 00:15:49.887 =====Discovery Log Entry 0====== 00:15:49.887 trtype: tcp 00:15:49.887 adrfam: ipv4 00:15:49.887 subtype: current discovery subsystem 00:15:49.887 treq: not required 00:15:49.887 portid: 0 00:15:49.887 trsvcid: 4420 00:15:49.887 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:15:49.887 traddr: 10.0.0.2 00:15:49.887 eflags: explicit discovery connections, duplicate discovery information 00:15:49.887 sectype: none 00:15:49.887 =====Discovery Log Entry 1====== 00:15:49.887 trtype: tcp 00:15:49.887 adrfam: ipv4 00:15:49.887 subtype: nvme subsystem 00:15:49.887 treq: not required 00:15:49.887 portid: 0 00:15:49.887 trsvcid: 4420 00:15:49.887 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:49.887 traddr: 10.0.0.2 00:15:49.887 eflags: none 00:15:49.887 sectype: none 00:15:49.887 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:49.887 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:49.887 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:49.887 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:49.887 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:49.887 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:49.887 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:49.887 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:49.887 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:49.887 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:49.888 08:14:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:51.808 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:51.808 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:15:51.808 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:51.808 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:51.808 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:51.808 08:14:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:53.724 08:14:50 
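[editor's note] The waitforserial helper traced above polls lsblk until the expected number of block devices carrying the subsystem serial appears. A sketch of that loop, assuming the serial string is unique to the newly attached namespaces:

    serial=SPDKISFASTANDAWESOME; want=2; got=0
    for ((i = 0; i <= 15 && got != want; i++)); do
        sleep 2
        got=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
    done
    (( got == want ))   # exit status 0 once both namespaces are visible
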
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:53.724 /dev/nvme0n2 ]] 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:53.724 08:14:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:54.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.297 08:14:51 
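[editor's note] get_nvme_devs, whose xtrace appears twice above, simply walks the tabular `nvme list` output and keeps the lines whose first column is a device node. Reconstructed from the trace:

    get_nvme_devs() {
        local dev _
        while read -r dev _; do
            # keep only rows whose first field is an NVMe device node
            [[ $dev == /dev/nvme* ]] && echo "$dev"
        done < <(nvme list)
    }
    devs=($(get_nvme_devs))   # here: /dev/nvme0n1 /dev/nvme0n2, so nvme_num=2
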
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:54.297 rmmod nvme_tcp 00:15:54.297 rmmod nvme_fabrics 00:15:54.297 rmmod nvme_keyring 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1918959 ']' 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1918959 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1918959 ']' 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1918959 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1918959 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1918959' 00:15:54.297 killing process with pid 1918959 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1918959 00:15:54.297 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1918959 00:15:54.559 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:54.559 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:54.559 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:54.559 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:15:54.559 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:15:54.559 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:15:54.559 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:54.559 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:54.559 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:54.559 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.559 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.559 08:14:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.476 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:56.476 00:15:56.476 real 0m15.604s 00:15:56.476 user 0m24.476s 00:15:56.476 sys 0m6.332s 00:15:56.476 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:56.476 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:56.476 ************************************ 00:15:56.476 END TEST nvmf_nvme_cli 00:15:56.476 ************************************ 00:15:56.476 08:14:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:56.476 08:14:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:56.476 08:14:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:56.476 08:14:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:56.476 08:14:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:56.739 ************************************ 00:15:56.739 START TEST nvmf_vfio_user 00:15:56.739 ************************************ 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:15:56.739 * Looking for test storage... 00:15:56.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:56.739 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:56.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.739 --rc genhtml_branch_coverage=1 00:15:56.739 --rc genhtml_function_coverage=1 00:15:56.739 --rc genhtml_legend=1 00:15:56.739 --rc geninfo_all_blocks=1 00:15:56.740 --rc geninfo_unexecuted_blocks=1 00:15:56.740 00:15:56.740 ' 00:15:56.740 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:56.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.740 --rc genhtml_branch_coverage=1 00:15:56.740 --rc genhtml_function_coverage=1 00:15:56.740 --rc genhtml_legend=1 00:15:56.740 --rc geninfo_all_blocks=1 00:15:56.740 --rc geninfo_unexecuted_blocks=1 00:15:56.740 00:15:56.740 ' 00:15:56.740 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:56.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.740 --rc genhtml_branch_coverage=1 00:15:56.740 --rc genhtml_function_coverage=1 00:15:56.740 --rc genhtml_legend=1 00:15:56.740 --rc geninfo_all_blocks=1 00:15:56.740 --rc geninfo_unexecuted_blocks=1 00:15:56.740 00:15:56.740 ' 00:15:56.740 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:56.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.740 --rc genhtml_branch_coverage=1 00:15:56.740 --rc genhtml_function_coverage=1 00:15:56.740 --rc genhtml_legend=1 00:15:56.740 --rc geninfo_all_blocks=1 00:15:56.740 --rc geninfo_unexecuted_blocks=1 00:15:56.740 00:15:56.740 ' 00:15:56.740 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:56.740 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:15:56.740 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.740 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.740 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.740 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.740 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.740 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.740 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.740 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.740 08:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:56.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:56.740 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:57.002 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:57.002 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
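[editor's note] Worth noting in the common.sh setup above: the host NQN and host ID are generated once via `nvme gen-hostnqn` and reused for every connect in the suite. A sketch of that initialization; the uuid-suffix extraction is an assumption about how common.sh derives NVME_HOSTID, not confirmed by the trace:

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumed: the uuid portion after the last colon
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
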
00:15:57.002 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:57.002 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:57.002 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:57.002 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:57.002 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:57.002 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:57.002 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:57.002 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:57.002 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1920583 00:15:57.002 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1920583' 00:15:57.002 Process pid: 1920583 00:15:57.002 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:57.002 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1920583 00:15:57.002 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1920583 ']' 00:15:57.002 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:57.002 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.002 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.002 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.002 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.002 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:57.002 [2024-11-28 08:14:54.089131] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:15:57.002 [2024-11-28 08:14:54.089207] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.002 [2024-11-28 08:14:54.176472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:57.002 [2024-11-28 08:14:54.211788] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.002 [2024-11-28 08:14:54.211821] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
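[editor's note] The per-device bring-up being replayed in this stretch (one vfio-user socket directory, malloc bdev, subsystem, namespace, and VFIOUSER listener per device) condenses to a loop like the following; every RPC is taken verbatim from the trace, with rpc.py assumed on PATH in place of the full $rpc_py path:

    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        rpc.py bdev_malloc_create 64 512 -b Malloc$i
        rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done
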
00:15:57.002 [2024-11-28 08:14:54.211828] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.002 [2024-11-28 08:14:54.211833] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.002 [2024-11-28 08:14:54.211837] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:57.002 [2024-11-28 08:14:54.213205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.002 [2024-11-28 08:14:54.213437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.003 [2024-11-28 08:14:54.213260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.003 [2024-11-28 08:14:54.213439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:57.944 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:57.944 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:57.944 08:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:58.888 08:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:58.888 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:58.888 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:58.888 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:58.888 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:58.888 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:59.149 Malloc1 00:15:59.149 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:59.410 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:59.410 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:59.671 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:59.671 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:59.671 08:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:59.932 Malloc2 00:15:59.932 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
00:15:59.932 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:00.193 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:00.457 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:00.457 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:00.457 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:00.457 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:00.457 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:00.457 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:00.457 [2024-11-28 08:14:57.598946] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:16:00.457 [2024-11-28 08:14:57.599016] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1921280 ] 00:16:00.457 [2024-11-28 08:14:57.639528] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:00.457 [2024-11-28 08:14:57.644839] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:00.457 [2024-11-28 08:14:57.644860] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9584e29000 00:16:00.457 [2024-11-28 08:14:57.645838] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:00.457 [2024-11-28 08:14:57.646845] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:00.457 [2024-11-28 08:14:57.647848] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:00.457 [2024-11-28 08:14:57.648853] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:00.457 [2024-11-28 08:14:57.649853] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:00.457 [2024-11-28 08:14:57.650859] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:00.457 [2024-11-28 08:14:57.651866] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:16:00.457 [2024-11-28 08:14:57.652870] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:00.457 [2024-11-28 08:14:57.653877] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:00.457 [2024-11-28 08:14:57.653885] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9584e1e000 00:16:00.457 [2024-11-28 08:14:57.654799] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:00.457 [2024-11-28 08:14:57.664266] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:00.457 [2024-11-28 08:14:57.664288] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:16:00.457 [2024-11-28 08:14:57.669996] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:00.457 [2024-11-28 08:14:57.670033] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:00.457 [2024-11-28 08:14:57.670095] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:16:00.457 [2024-11-28 08:14:57.670108] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:16:00.457 [2024-11-28 08:14:57.670113] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:16:00.458 [2024-11-28 08:14:57.670993] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:00.458 [2024-11-28 08:14:57.671003] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:16:00.458 [2024-11-28 08:14:57.671008] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:16:00.458 [2024-11-28 08:14:57.671998] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:00.458 [2024-11-28 08:14:57.672006] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:16:00.458 [2024-11-28 08:14:57.672012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:00.458 [2024-11-28 08:14:57.673001] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:00.458 [2024-11-28 08:14:57.673008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:00.458 [2024-11-28 08:14:57.674010] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
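[editor's note] The register values surfacing in this init handshake decode directly: the VS read of 0x10300 above is major 1, minor 3, i.e. the "NVMe Specification Version (VS): 1.3" reported later in the identify dump. For instance:

    # decode the logged VS register value (offset 0x8, value 0x10300)
    vs=$(( 0x10300 ))
    printf 'NVMe %d.%d\n' $(( (vs >> 16) & 0xffff )) $(( (vs >> 8) & 0xff ))   # prints: NVMe 1.3
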
00:16:00.458 [2024-11-28 08:14:57.674016] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:00.458 [2024-11-28 08:14:57.674020] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:00.458 [2024-11-28 08:14:57.674025] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:00.458 [2024-11-28 08:14:57.674131] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:16:00.458 [2024-11-28 08:14:57.674137] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:00.458 [2024-11-28 08:14:57.674141] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:00.458 [2024-11-28 08:14:57.675018] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:00.458 [2024-11-28 08:14:57.676022] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:00.458 [2024-11-28 08:14:57.677028] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:00.458 [2024-11-28 08:14:57.678028] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:00.458 [2024-11-28 08:14:57.678084] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:00.458 [2024-11-28 08:14:57.679036] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:00.458 [2024-11-28 08:14:57.679042] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:00.458 [2024-11-28 08:14:57.679046] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:00.458 [2024-11-28 08:14:57.679060] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:16:00.458 [2024-11-28 08:14:57.679069] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:00.458 [2024-11-28 08:14:57.679084] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:00.458 [2024-11-28 08:14:57.679087] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:00.458 [2024-11-28 08:14:57.679090] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:00.458 [2024-11-28 08:14:57.679100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:16:00.458 [2024-11-28 08:14:57.679134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:00.458 [2024-11-28 08:14:57.679141] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:16:00.458 [2024-11-28 08:14:57.679145] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:16:00.458 [2024-11-28 08:14:57.679148] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:16:00.458 [2024-11-28 08:14:57.679151] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:00.458 [2024-11-28 08:14:57.679155] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:16:00.458 [2024-11-28 08:14:57.679162] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:16:00.458 [2024-11-28 08:14:57.679168] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:16:00.458 [2024-11-28 08:14:57.679176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:00.458 [2024-11-28 08:14:57.679189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:00.458 [2024-11-28 08:14:57.679198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:00.458 [2024-11-28 08:14:57.679207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.458 [2024-11-28 08:14:57.679213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.458 [2024-11-28 08:14:57.679219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.458 [2024-11-28 08:14:57.679225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:00.458 [2024-11-28 08:14:57.679228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:00.458 [2024-11-28 08:14:57.679235] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:00.458 [2024-11-28 08:14:57.679242] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:00.458 [2024-11-28 08:14:57.679250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:00.458 [2024-11-28 08:14:57.679254] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:16:00.458 
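[editor's note] The cdw10 values on the IDENTIFY commands in this stretch are CNS codes from the NVMe spec; a small helper to read them back out of the trace:

    cns_name() {   # map IDENTIFY cdw10's low byte (CNS) to its meaning
        case $(( $1 & 0xff )) in
            0) echo 'identify namespace' ;;
            1) echo 'identify controller' ;;
            2) echo 'active namespace ID list' ;;
            3) echo 'namespace ID descriptors' ;;
            *) echo 'other CNS' ;;
        esac
    }
    cns_name 0x00000001   # the cdw10:00000001 above -> identify controller
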
[2024-11-28 08:14:57.679257] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:00.458 [2024-11-28 08:14:57.679264] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:16:00.458 [2024-11-28 08:14:57.679268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:00.458 [2024-11-28 08:14:57.679274] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:00.458 [2024-11-28 08:14:57.679287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:00.458 [2024-11-28 08:14:57.679333] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:16:00.458 [2024-11-28 08:14:57.679339] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:00.458 [2024-11-28 08:14:57.679344] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:00.458 [2024-11-28 08:14:57.679347] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:00.458 [2024-11-28 08:14:57.679350] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:00.458 [2024-11-28 08:14:57.679355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:00.458 [2024-11-28 08:14:57.679367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:00.458 [2024-11-28 08:14:57.679375] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:16:00.458 [2024-11-28 08:14:57.679382] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:16:00.458 [2024-11-28 08:14:57.679389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:00.458 [2024-11-28 08:14:57.679394] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:00.458 [2024-11-28 08:14:57.679397] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:00.458 [2024-11-28 08:14:57.679399] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:00.458 [2024-11-28 08:14:57.679404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:00.458 [2024-11-28 08:14:57.679424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:00.458 [2024-11-28 08:14:57.679432] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:16:00.458 [2024-11-28 08:14:57.679437] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:00.458 [2024-11-28 08:14:57.679442] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:00.458 [2024-11-28 08:14:57.679445] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:00.458 [2024-11-28 08:14:57.679447] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:00.458 [2024-11-28 08:14:57.679452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:00.458 [2024-11-28 08:14:57.679465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:00.458 [2024-11-28 08:14:57.679472] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:00.458 [2024-11-28 08:14:57.679478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:00.458 [2024-11-28 08:14:57.679485] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:16:00.459 [2024-11-28 08:14:57.679490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:00.459 [2024-11-28 08:14:57.679495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:00.459 [2024-11-28 08:14:57.679499] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:16:00.459 [2024-11-28 08:14:57.679504] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:00.459 [2024-11-28 08:14:57.679508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:16:00.459 [2024-11-28 08:14:57.679511] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:16:00.459 [2024-11-28 08:14:57.679525] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:00.459 [2024-11-28 08:14:57.679535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:00.459 [2024-11-28 08:14:57.679543] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:00.459 [2024-11-28 08:14:57.679549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:00.459 [2024-11-28 08:14:57.679557] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:00.459 [2024-11-28 08:14:57.679569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:00.459 [2024-11-28 08:14:57.679577] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:00.459 [2024-11-28 08:14:57.679584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:00.459 [2024-11-28 08:14:57.679593] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:00.459 [2024-11-28 08:14:57.679596] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:00.459 [2024-11-28 08:14:57.679599] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:00.459 [2024-11-28 08:14:57.679601] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:00.459 [2024-11-28 08:14:57.679604] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:00.459 [2024-11-28 08:14:57.679608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:00.459 [2024-11-28 08:14:57.679614] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:00.459 [2024-11-28 08:14:57.679617] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:00.459 [2024-11-28 08:14:57.679619] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:00.459 [2024-11-28 08:14:57.679623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:00.459 [2024-11-28 08:14:57.679629] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:00.459 [2024-11-28 08:14:57.679632] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:00.459 [2024-11-28 08:14:57.679634] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:00.459 [2024-11-28 08:14:57.679638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:00.459 [2024-11-28 08:14:57.679644] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:00.459 [2024-11-28 08:14:57.679647] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:00.459 [2024-11-28 08:14:57.679649] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:00.459 [2024-11-28 08:14:57.679653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:00.459 [2024-11-28 08:14:57.679658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:00.459 [2024-11-28 08:14:57.679667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:16:00.459 [2024-11-28 08:14:57.679675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:00.459 [2024-11-28 08:14:57.679680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:00.459 ===================================================== 00:16:00.459 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:00.459 ===================================================== 00:16:00.459 Controller Capabilities/Features 00:16:00.459 ================================ 00:16:00.459 Vendor ID: 4e58 00:16:00.459 Subsystem Vendor ID: 4e58 00:16:00.459 Serial Number: SPDK1 00:16:00.459 Model Number: SPDK bdev Controller 00:16:00.459 Firmware Version: 25.01 00:16:00.459 Recommended Arb Burst: 6 00:16:00.459 IEEE OUI Identifier: 8d 6b 50 00:16:00.459 Multi-path I/O 00:16:00.459 May have multiple subsystem ports: Yes 00:16:00.459 May have multiple controllers: Yes 00:16:00.459 Associated with SR-IOV VF: No 00:16:00.459 Max Data Transfer Size: 131072 00:16:00.459 Max Number of Namespaces: 32 00:16:00.459 Max Number of I/O Queues: 127 00:16:00.459 NVMe Specification Version (VS): 1.3 00:16:00.459 NVMe Specification Version (Identify): 1.3 00:16:00.459 Maximum Queue Entries: 256 00:16:00.459 Contiguous Queues Required: Yes 00:16:00.459 Arbitration Mechanisms Supported 00:16:00.459 Weighted Round Robin: Not Supported 00:16:00.459 Vendor Specific: Not Supported 00:16:00.459 Reset Timeout: 15000 ms 00:16:00.459 Doorbell Stride: 4 bytes 00:16:00.459 NVM Subsystem Reset: Not Supported 00:16:00.459 Command Sets Supported 00:16:00.459 NVM Command Set: Supported 00:16:00.459 Boot Partition: Not Supported 00:16:00.459 Memory Page Size Minimum: 4096 bytes 00:16:00.459 Memory Page Size Maximum: 4096 bytes 00:16:00.459 Persistent Memory Region: Not Supported 00:16:00.459 Optional Asynchronous Events Supported 00:16:00.459 Namespace Attribute Notices: Supported 00:16:00.459 Firmware Activation Notices: Not Supported 00:16:00.459 ANA Change Notices: Not Supported 00:16:00.459 PLE Aggregate Log Change Notices: Not Supported 00:16:00.459 LBA Status Info Alert Notices: Not Supported 00:16:00.459 EGE Aggregate Log Change Notices: Not Supported 00:16:00.459 Normal NVM Subsystem Shutdown event: Not Supported 00:16:00.459 Zone Descriptor Change Notices: Not Supported 00:16:00.459 Discovery Log Change Notices: Not Supported 00:16:00.459 Controller Attributes 00:16:00.459 128-bit Host Identifier: Supported 00:16:00.459 Non-Operational Permissive Mode: Not Supported 00:16:00.459 NVM Sets: Not Supported 00:16:00.459 Read Recovery Levels: Not Supported 00:16:00.459 Endurance Groups: Not Supported 00:16:00.459 Predictable Latency Mode: Not Supported 00:16:00.459 Traffic Based Keep ALive: Not Supported 00:16:00.459 Namespace Granularity: Not Supported 00:16:00.459 SQ Associations: Not Supported 00:16:00.459 UUID List: Not Supported 00:16:00.459 Multi-Domain Subsystem: Not Supported 00:16:00.459 Fixed Capacity Management: Not Supported 00:16:00.459 Variable Capacity Management: Not Supported 00:16:00.459 Delete Endurance Group: Not Supported 00:16:00.459 Delete NVM Set: Not Supported 00:16:00.459 Extended LBA Formats Supported: Not Supported 00:16:00.459 Flexible Data Placement Supported: Not Supported 00:16:00.459 00:16:00.459 Controller Memory Buffer Support 00:16:00.459 ================================ 00:16:00.459 
Supported: No 00:16:00.459 00:16:00.459 Persistent Memory Region Support 00:16:00.459 ================================ 00:16:00.459 Supported: No 00:16:00.459 00:16:00.459 Admin Command Set Attributes 00:16:00.459 ============================ 00:16:00.459 Security Send/Receive: Not Supported 00:16:00.460 Format NVM: Not Supported 00:16:00.460 Firmware Activate/Download: Not Supported 00:16:00.460 Namespace Management: Not Supported 00:16:00.460 Device Self-Test: Not Supported 00:16:00.460 Directives: Not Supported 00:16:00.460 NVMe-MI: Not Supported 00:16:00.460 Virtualization Management: Not Supported 00:16:00.460 Doorbell Buffer Config: Not Supported 00:16:00.460 Get LBA Status Capability: Not Supported 00:16:00.460 Command & Feature Lockdown Capability: Not Supported 00:16:00.460 Abort Command Limit: 4 00:16:00.460 Async Event Request Limit: 4 00:16:00.460 Number of Firmware Slots: N/A 00:16:00.460 Firmware Slot 1 Read-Only: N/A 00:16:00.460 Firmware Activation Without Reset: N/A 00:16:00.460 Multiple Update Detection Support: N/A 00:16:00.460 Firmware Update Granularity: No Information Provided 00:16:00.460 Per-Namespace SMART Log: No 00:16:00.460 Asymmetric Namespace Access Log Page: Not Supported 00:16:00.460 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:00.460 Command Effects Log Page: Supported 00:16:00.460 Get Log Page Extended Data: Supported 00:16:00.460 Telemetry Log Pages: Not Supported 00:16:00.460 Persistent Event Log Pages: Not Supported 00:16:00.460 Supported Log Pages Log Page: May Support 00:16:00.460 Commands Supported & Effects Log Page: Not Supported 00:16:00.460 Feature Identifiers & Effects Log Page:May Support 00:16:00.460 NVMe-MI Commands & Effects Log Page: May Support 00:16:00.460 Data Area 4 for Telemetry Log: Not Supported 00:16:00.460 Error Log Page Entries Supported: 128 00:16:00.460 Keep Alive: Supported 00:16:00.460 Keep Alive Granularity: 10000 ms 00:16:00.460 00:16:00.460 NVM Command Set Attributes 00:16:00.460 ========================== 00:16:00.460 Submission Queue Entry Size 00:16:00.460 Max: 64 00:16:00.460 Min: 64 00:16:00.460 Completion Queue Entry Size 00:16:00.460 Max: 16 00:16:00.460 Min: 16 00:16:00.460 Number of Namespaces: 32 00:16:00.460 Compare Command: Supported 00:16:00.460 Write Uncorrectable Command: Not Supported 00:16:00.460 Dataset Management Command: Supported 00:16:00.460 Write Zeroes Command: Supported 00:16:00.460 Set Features Save Field: Not Supported 00:16:00.460 Reservations: Not Supported 00:16:00.460 Timestamp: Not Supported 00:16:00.460 Copy: Supported 00:16:00.460 Volatile Write Cache: Present 00:16:00.460 Atomic Write Unit (Normal): 1 00:16:00.460 Atomic Write Unit (PFail): 1 00:16:00.460 Atomic Compare & Write Unit: 1 00:16:00.460 Fused Compare & Write: Supported 00:16:00.460 Scatter-Gather List 00:16:00.460 SGL Command Set: Supported (Dword aligned) 00:16:00.460 SGL Keyed: Not Supported 00:16:00.460 SGL Bit Bucket Descriptor: Not Supported 00:16:00.460 SGL Metadata Pointer: Not Supported 00:16:00.460 Oversized SGL: Not Supported 00:16:00.460 SGL Metadata Address: Not Supported 00:16:00.460 SGL Offset: Not Supported 00:16:00.460 Transport SGL Data Block: Not Supported 00:16:00.460 Replay Protected Memory Block: Not Supported 00:16:00.460 00:16:00.460 Firmware Slot Information 00:16:00.460 ========================= 00:16:00.460 Active slot: 1 00:16:00.460 Slot 1 Firmware Revision: 25.01 00:16:00.460 00:16:00.460 00:16:00.460 Commands Supported and Effects 00:16:00.460 ============================== 00:16:00.460 Admin 
Commands 00:16:00.460 -------------- 00:16:00.460 Get Log Page (02h): Supported 00:16:00.460 Identify (06h): Supported 00:16:00.460 Abort (08h): Supported 00:16:00.460 Set Features (09h): Supported 00:16:00.460 Get Features (0Ah): Supported 00:16:00.460 Asynchronous Event Request (0Ch): Supported 00:16:00.460 Keep Alive (18h): Supported 00:16:00.460 I/O Commands 00:16:00.460 ------------ 00:16:00.460 Flush (00h): Supported LBA-Change 00:16:00.460 Write (01h): Supported LBA-Change 00:16:00.460 Read (02h): Supported 00:16:00.460 Compare (05h): Supported 00:16:00.460 Write Zeroes (08h): Supported LBA-Change 00:16:00.460 Dataset Management (09h): Supported LBA-Change 00:16:00.460 Copy (19h): Supported LBA-Change 00:16:00.460 00:16:00.460 Error Log 00:16:00.460 ========= 00:16:00.460 00:16:00.460 Arbitration 00:16:00.460 =========== 00:16:00.460 Arbitration Burst: 1 00:16:00.460 00:16:00.460 Power Management 00:16:00.460 ================ 00:16:00.460 Number of Power States: 1 00:16:00.460 Current Power State: Power State #0 00:16:00.460 Power State #0: 00:16:00.460 Max Power: 0.00 W 00:16:00.460 Non-Operational State: Operational 00:16:00.460 Entry Latency: Not Reported 00:16:00.460 Exit Latency: Not Reported 00:16:00.460 Relative Read Throughput: 0 00:16:00.460 Relative Read Latency: 0 00:16:00.460 Relative Write Throughput: 0 00:16:00.460 Relative Write Latency: 0 00:16:00.460 Idle Power: Not Reported 00:16:00.460 Active Power: Not Reported 00:16:00.460 Non-Operational Permissive Mode: Not Supported 00:16:00.460 00:16:00.460 Health Information 00:16:00.460 ================== 00:16:00.460 Critical Warnings: 00:16:00.460 Available Spare Space: OK 00:16:00.460 Temperature: OK 00:16:00.460 Device Reliability: OK 00:16:00.460 Read Only: No 00:16:00.460 Volatile Memory Backup: OK 00:16:00.460 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:00.460 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:00.460 Available Spare: 0% 00:16:00.460 Available Spare Threshold: 0% 00:16:00.460 Life Percentage Used: 0% 00:16:00.460 Data Units Read: 0 00:16:00.460 Data Units Written: 0 00:16:00.460 Host Read Commands: 0 00:16:00.460 Host Write Commands: 0 00:16:00.460 Controller Busy Time: 0 minutes 00:16:00.460 Power Cycles: 0 00:16:00.460 Power On Hours: 0 hours 00:16:00.460 Unsafe Shutdowns: 0 00:16:00.460 Unrecoverable Media Errors: 0 00:16:00.460 Lifetime Error Log Entries: 0 00:16:00.460 Warning Temperature Time: 0 minutes 00:16:00.461 Critical Temperature Time: 0 minutes 00:16:00.461 00:16:00.461 Number of Queues 00:16:00.461 ================ 00:16:00.461 Number of I/O Submission Queues: 127 00:16:00.461 Number of I/O Completion Queues: 127 00:16:00.461 00:16:00.461 Active Namespaces 00:16:00.461 ================= 00:16:00.461 Namespace ID:1 00:16:00.461 Error Recovery Timeout: Unlimited 00:16:00.461 Command Set Identifier: NVM (00h) 00:16:00.461 Deallocate: Supported 00:16:00.461 Deallocated/Unwritten Error: Not Supported 00:16:00.461 Deallocated Read Value: Unknown 00:16:00.461 Deallocate in Write Zeroes: Not Supported 00:16:00.461 Deallocated Guard Field: 0xFFFF 00:16:00.461 Flush: Supported 00:16:00.461 Reservation: Supported 00:16:00.461 Namespace Sharing Capabilities: Multiple Controllers 00:16:00.461 Size (in LBAs): 131072 (0GiB) 00:16:00.461 Capacity (in LBAs): 131072 (0GiB) 00:16:00.461 Utilization (in LBAs): 131072 (0GiB) 00:16:00.461 NGUID: E3B177FC5FED438CA10280CCB1E6257B 00:16:00.461 UUID: e3b177fc-5fed-438c-a102-80ccb1e6257b 00:16:00.461 Thin Provisioning: Not Supported 00:16:00.461 Per-NS Atomic Units: Yes 00:16:00.461 Atomic Boundary Size (Normal): 0 00:16:00.461 Atomic Boundary Size (PFail): 0 00:16:00.461 Atomic Boundary Offset: 0 00:16:00.461 Maximum Single Source Range Length: 65535 00:16:00.461 Maximum Copy Length: 65535 00:16:00.461 Maximum Source Range Count: 1 00:16:00.461 NGUID/EUI64 Never Reused: No 00:16:00.461 Namespace Write Protected: No 00:16:00.461 Number of LBA Formats: 1 00:16:00.461 Current LBA Format: LBA Format #00 00:16:00.461 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:00.461 00:16:00.461
[2024-11-28 08:14:57.679754] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:00.460 [2024-11-28 08:14:57.679762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:00.460 [2024-11-28 08:14:57.679786] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:16:00.460 [2024-11-28 08:14:57.679792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.460 [2024-11-28 08:14:57.679797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.460 [2024-11-28 08:14:57.679802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.460 [2024-11-28 08:14:57.679806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:00.460 [2024-11-28 08:14:57.680046] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:00.460 [2024-11-28 08:14:57.680054] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:00.460 [2024-11-28 08:14:57.681045] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:00.460 [2024-11-28 08:14:57.681084] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us [2024-11-28 08:14:57.681091] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms [2024-11-28 08:14:57.682052] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 [2024-11-28 08:14:57.682060] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds [2024-11-28 08:14:57.682119] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl [2024-11-28 08:14:57.684166] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:00.460 08:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
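The spdk_nvme_perf invocation above drives the controller it just identified: -q 128 is the queue depth, -o 4096 the I/O size in bytes, -w read the access pattern, -t 5 the run time in seconds, and -c 0x2 the core mask; -s 256 and -g appear to tune the DPDK hugepage allocation (size in MB and single-file memory segments). A minimal sketch of the same pattern, assuming an SPDK target is already exporting a vfio-user controller at this path (run from an SPDK build tree; the endpoint path is taken from the run above):

    # hypothetical reproduction sketch, not part of the logged run
    ctrl=/var/run/vfio-user/domain/vfio-user1/1
    ./build/bin/spdk_nvme_perf \
        -r "trtype:VFIOUSER traddr:$ctrl subnqn:nqn.2019-07.io.spdk:cnode1" \
        -q 128 -o 4096 -w read -t 5 -c 0x2 -s 256 -g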
00:16:00.722 [2024-11-28 08:14:57.881932] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:06.009 Initializing NVMe Controllers 00:16:06.009 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:06.009 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:06.009 Initialization complete. Launching workers. 00:16:06.009 ======================================================== 00:16:06.009 Latency(us) 00:16:06.009 Device Information : IOPS MiB/s Average min max 00:16:06.009 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40052.93 156.46 3196.01 869.72 7518.56 00:16:06.009 ======================================================== 00:16:06.009 Total : 40052.93 156.46 3196.01 869.72 7518.56 00:16:06.009 00:16:06.009 [2024-11-28 08:15:02.902373] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:06.010 08:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:06.010 [2024-11-28 08:15:03.096228] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:11.296 Initializing NVMe Controllers 00:16:11.296 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:11.296 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:11.296 Initialization complete. Launching workers. 
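A quick sanity check on the Latency(us) tables (the read results above and the write results below): with 4096-byte I/O, the MiB/s column should equal IOPS x 4096 / 2^20. A hypothetical one-liner, not part of the test:

    awk 'BEGIN { printf "%.2f %.2f\n", 40052.93*4096/2^20, 16035.12*4096/2^20 }'
    # prints 156.46 62.64 -- matching the MiB/s reported for the read and write runs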
00:16:11.296 ======================================================== 00:16:11.296 Latency(us) 00:16:11.296 Device Information : IOPS MiB/s Average min max 00:16:11.296 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16035.12 62.64 7987.24 5217.97 14931.23 00:16:11.296 ======================================================== 00:16:11.296 Total : 16035.12 62.64 7987.24 5217.97 14931.23 00:16:11.296 00:16:11.296 [2024-11-28 08:15:08.137146] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:11.296 08:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:11.296 [2024-11-28 08:15:08.338019] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:16.584 [2024-11-28 08:15:13.413370] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:16.584 Initializing NVMe Controllers 00:16:16.584 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:16.584 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:16.584 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:16.584 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:16.584 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:16.584 Initialization complete. Launching workers. 00:16:16.584 Starting thread on core 2 00:16:16.584 Starting thread on core 3 00:16:16.584 Starting thread on core 1 00:16:16.584 08:15:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:16.584 [2024-11-28 08:15:13.667549] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:19.893 [2024-11-28 08:15:16.739817] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:19.893 Initializing NVMe Controllers 00:16:19.893 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:19.893 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:19.893 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:19.893 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:19.893 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:19.893 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:19.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:19.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:19.893 Initialization complete. Launching workers. 
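In the arbitration results that follow, the secs/100000 ios column is just 100000 divided by the IO/s column, so the per-core lines can be compared either way. For core 0 (a hypothetical check, not part of the run):

    awk 'BEGIN { printf "%.2f\n", 100000/10029.67 }'   # 9.97, matching core 0's secs/100000 ios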
00:16:19.893 Starting thread on core 1 with urgent priority queue 00:16:19.893 Starting thread on core 2 with urgent priority queue 00:16:19.893 Starting thread on core 3 with urgent priority queue 00:16:19.893 Starting thread on core 0 with urgent priority queue 00:16:19.893 SPDK bdev Controller (SPDK1 ) core 0: 10029.67 IO/s 9.97 secs/100000 ios 00:16:19.893 SPDK bdev Controller (SPDK1 ) core 1: 11177.00 IO/s 8.95 secs/100000 ios 00:16:19.893 SPDK bdev Controller (SPDK1 ) core 2: 11473.67 IO/s 8.72 secs/100000 ios 00:16:19.893 SPDK bdev Controller (SPDK1 ) core 3: 10802.00 IO/s 9.26 secs/100000 ios 00:16:19.893 ======================================================== 00:16:19.893 00:16:19.893 08:15:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:19.893 [2024-11-28 08:15:16.983569] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:19.893 Initializing NVMe Controllers 00:16:19.893 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:19.893 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:19.893 Namespace ID: 1 size: 0GB 00:16:19.893 Initialization complete. 00:16:19.893 INFO: using host memory buffer for IO 00:16:19.893 Hello world! 00:16:19.893 [2024-11-28 08:15:17.018802] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:19.893 08:15:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:20.153 [2024-11-28 08:15:17.251486] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:21.096 Initializing NVMe Controllers 00:16:21.096 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:21.096 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:21.096 Initialization complete. Launching workers. 
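The overhead tool was started with -H, which presumably enables the histogram output below: the summary lines report per-I/O time in the submit and complete paths in nanoseconds, and the histograms bucket the same samples in microseconds, with a cumulative percentage and a per-bucket count in parentheses. Converting the submit average to microseconds (a hypothetical check):

    awk 'BEGIN { printf "%.1f us\n", 5209.9/1000 }'   # ~5.2 us average submit overhead per I/O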
00:16:21.096 submit (in ns) avg, min, max = 5209.9, 2831.7, 4000517.5 00:16:21.096 complete (in ns) avg, min, max = 17350.2, 1641.7, 3998561.7 00:16:21.096 00:16:21.096 Submit histogram 00:16:21.096 ================ 00:16:21.096 Range in us Cumulative Count 00:16:21.096 2.827 - 2.840: 0.1309% ( 26) 00:16:21.096 2.840 - 2.853: 0.3173% ( 37) 00:16:21.096 2.853 - 2.867: 1.3196% ( 199) 00:16:21.096 2.867 - 2.880: 3.2536% ( 384) 00:16:21.096 2.880 - 2.893: 7.0813% ( 760) 00:16:21.096 2.893 - 2.907: 12.0071% ( 978) 00:16:21.096 2.907 - 2.920: 17.9048% ( 1171) 00:16:21.096 2.920 - 2.933: 24.3364% ( 1277) 00:16:21.096 2.933 - 2.947: 29.6349% ( 1052) 00:16:21.096 2.947 - 2.960: 35.2506% ( 1115) 00:16:21.096 2.960 - 2.973: 40.5943% ( 1061) 00:16:21.096 2.973 - 2.987: 45.8121% ( 1036) 00:16:21.096 2.987 - 3.000: 51.5387% ( 1137) 00:16:21.096 3.000 - 3.013: 58.5898% ( 1400) 00:16:21.096 3.013 - 3.027: 67.2022% ( 1710) 00:16:21.096 3.027 - 3.040: 75.5024% ( 1648) 00:16:21.096 3.040 - 3.053: 82.9715% ( 1483) 00:16:21.096 3.053 - 3.067: 89.6197% ( 1320) 00:16:21.096 3.067 - 3.080: 94.3037% ( 930) 00:16:21.097 3.080 - 3.093: 96.9076% ( 517) 00:16:21.097 3.093 - 3.107: 98.2372% ( 264) 00:16:21.097 3.107 - 3.120: 98.8366% ( 119) 00:16:21.097 3.120 - 3.133: 99.1589% ( 64) 00:16:21.097 3.133 - 3.147: 99.3906% ( 46) 00:16:21.097 3.147 - 3.160: 99.5064% ( 23) 00:16:21.097 3.160 - 3.173: 99.5820% ( 15) 00:16:21.097 3.173 - 3.187: 99.6223% ( 8) 00:16:21.097 3.253 - 3.267: 99.6273% ( 1) 00:16:21.097 3.400 - 3.413: 99.6323% ( 1) 00:16:21.097 3.600 - 3.627: 99.6374% ( 1) 00:16:21.097 3.680 - 3.707: 99.6424% ( 1) 00:16:21.097 3.867 - 3.893: 99.6474% ( 1) 00:16:21.097 3.973 - 4.000: 99.6525% ( 1) 00:16:21.097 4.587 - 4.613: 99.6575% ( 1) 00:16:21.097 4.667 - 4.693: 99.6626% ( 1) 00:16:21.097 4.720 - 4.747: 99.6676% ( 1) 00:16:21.097 4.773 - 4.800: 99.6726% ( 1) 00:16:21.097 4.853 - 4.880: 99.6777% ( 1) 00:16:21.097 4.960 - 4.987: 99.6827% ( 1) 00:16:21.097 4.987 - 5.013: 99.6978% ( 3) 00:16:21.097 5.013 - 5.040: 99.7028% ( 1) 00:16:21.097 5.120 - 5.147: 99.7079% ( 1) 00:16:21.097 5.147 - 5.173: 99.7129% ( 1) 00:16:21.097 5.200 - 5.227: 99.7180% ( 1) 00:16:21.097 5.253 - 5.280: 99.7331% ( 3) 00:16:21.097 5.360 - 5.387: 99.7381% ( 1) 00:16:21.097 5.387 - 5.413: 99.7431% ( 1) 00:16:21.097 5.413 - 5.440: 99.7482% ( 1) 00:16:21.097 5.493 - 5.520: 99.7532% ( 1) 00:16:21.097 5.520 - 5.547: 99.7633% ( 2) 00:16:21.097 5.600 - 5.627: 99.7683% ( 1) 00:16:21.097 5.627 - 5.653: 99.7734% ( 1) 00:16:21.097 5.707 - 5.733: 99.7784% ( 1) 00:16:21.097 5.733 - 5.760: 99.7834% ( 1) 00:16:21.097 5.760 - 5.787: 99.7885% ( 1) 00:16:21.097 5.813 - 5.840: 99.7985% ( 2) 00:16:21.097 5.840 - 5.867: 99.8086% ( 2) 00:16:21.097 5.867 - 5.893: 99.8187% ( 2) 00:16:21.097 5.973 - 6.000: 99.8288% ( 2) 00:16:21.097 6.000 - 6.027: 99.8388% ( 2) 00:16:21.097 6.027 - 6.053: 99.8439% ( 1) 00:16:21.097 6.080 - 6.107: 99.8489% ( 1) 00:16:21.097 6.133 - 6.160: 99.8539% ( 1) 00:16:21.097 6.187 - 6.213: 99.8590% ( 1) 00:16:21.097 6.240 - 6.267: 99.8691% ( 2) 00:16:21.097 6.267 - 6.293: 99.8741% ( 1) 00:16:21.097 6.293 - 6.320: 99.8892% ( 3) 00:16:21.097 6.320 - 6.347: 99.8942% ( 1) 00:16:21.097 6.373 - 6.400: 99.8993% ( 1) 00:16:21.097 6.427 - 6.453: 99.9043% ( 1) 00:16:21.097 6.480 - 6.507: 99.9093% ( 1) 00:16:21.097 6.507 - 6.533: 99.9194% ( 2) 00:16:21.097 6.560 - 6.587: 99.9245% ( 1) 00:16:21.097 6.773 - 6.800: 99.9295% ( 1) 00:16:21.097 6.987 - 7.040: 99.9345% ( 1) 00:16:21.097 7.200 - 7.253: 99.9396% ( 1) 00:16:21.097 7.413 - 7.467: 99.9446% ( 1) 
00:16:21.097 3986.773 - 4014.080: 100.0000% ( 11) 00:16:21.097 00:16:21.097 [2024-11-28 08:15:18.272113] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:21.097 Complete histogram 00:16:21.097 ================== 00:16:21.097 Range in us Cumulative Count 00:16:21.097 1.640 - 1.647: 0.1058% ( 21) 00:16:21.097 1.647 - 1.653: 0.9670% ( 171) 00:16:21.097 1.653 - 1.660: 1.0829% ( 23) 00:16:21.097 1.660 - 1.667: 1.1886% ( 21) 00:16:21.097 1.667 - 1.673: 1.2642% ( 15) 00:16:21.097 1.673 - 1.680: 1.3196% ( 11) 00:16:21.097 1.680 - 1.687: 1.3397% ( 4) 00:16:21.097 1.687 - 1.693: 1.3498% ( 2) 00:16:21.097 1.693 - 1.700: 1.3548% ( 1) 00:16:21.097 1.700 - 1.707: 1.3750% ( 4) 00:16:21.097 1.707 - 1.720: 40.3727% ( 7743) 00:16:21.097 1.720 - 1.733: 66.5777% ( 5203) 00:16:21.097 1.733 - 1.747: 79.3704% ( 2540) 00:16:21.097 1.747 - 1.760: 83.1982% ( 760) 00:16:21.097 1.760 - 1.773: 84.3667% ( 232) 00:16:21.097 1.773 - 1.787: 89.2974% ( 979) 00:16:21.097 1.787 - 1.800: 94.9333% ( 1119) 00:16:21.097 1.800 - 1.813: 97.9804% ( 605) 00:16:21.097 1.813 - 1.827: 99.1035% ( 223) 00:16:21.097 1.827 - 1.840: 99.3755% ( 54) 00:16:21.097 1.840 - 1.853: 99.4158% ( 8) 00:16:21.097 1.853 - 1.867: 99.4309% ( 3) 00:16:21.097 1.867 - 1.880: 99.4359% ( 1) 00:16:21.097 1.920 - 1.933: 99.4409% ( 1) 00:16:21.097 1.933 - 1.947: 99.4460% ( 1) 00:16:21.097 3.493 - 3.520: 99.4510% ( 1) 00:16:21.097 3.760 - 3.787: 99.4561% ( 1) 00:16:21.097 3.973 - 4.000: 99.4611% ( 1) 00:16:21.097 4.027 - 4.053: 99.4661% ( 1) 00:16:21.097 4.080 - 4.107: 99.4762% ( 2) 00:16:21.097 4.213 - 4.240: 99.4812% ( 1) 00:16:21.097 4.267 - 4.293: 99.4863% ( 1) 00:16:21.097 4.293 - 4.320: 99.4913% ( 1) 00:16:21.097 4.347 - 4.373: 99.4963% ( 1) 00:16:21.097 4.427 - 4.453: 99.5014% ( 1) 00:16:21.097 4.613 - 4.640: 99.5064% ( 1) 00:16:21.097 4.667 - 4.693: 99.5115% ( 1) 00:16:21.097 4.720 - 4.747: 99.5165% ( 1) 00:16:21.097 4.747 - 4.773: 99.5215% ( 1) 00:16:21.097 4.800 - 4.827: 99.5266% ( 1) 00:16:21.097 4.853 - 4.880: 99.5316% ( 1) 00:16:21.097 4.880 - 4.907: 99.5467% ( 3) 00:16:21.097 5.013 - 5.040: 99.5518% ( 1) 00:16:21.097 5.067 - 5.093: 99.5618% ( 2) 00:16:21.097 5.307 - 5.333: 99.5669% ( 1) 00:16:21.097 5.360 - 5.387: 99.5719% ( 1) 00:16:21.097 5.440 - 5.467: 99.5769% ( 1) 00:16:21.097 5.707 - 5.733: 99.5820% ( 1) 00:16:21.097 7.040 - 7.093: 99.5870% ( 1) 00:16:21.097 9.280 - 9.333: 99.5920% ( 1) 00:16:21.097 10.400 - 10.453: 99.5971% ( 1) 00:16:21.097 33.707 - 33.920: 99.6021% ( 1) 00:16:21.097 34.133 - 34.347: 99.6072% ( 1) 00:16:21.097 2321.067 - 2334.720: 99.6122% ( 1) 00:16:21.097 3986.773 - 4014.080: 100.0000% ( 77) 00:16:21.097 00:16:21.097 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:21.097 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:21.097 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:21.097 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:21.097 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:21.358 [ 00:16:21.358 { 00:16:21.358 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:16:21.358 "subtype": "Discovery", 00:16:21.358 "listen_addresses": [], 00:16:21.358 "allow_any_host": true, 00:16:21.358 "hosts": [] 00:16:21.358 }, 00:16:21.358 { 00:16:21.358 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:21.358 "subtype": "NVMe", 00:16:21.358 "listen_addresses": [ 00:16:21.358 { 00:16:21.358 "trtype": "VFIOUSER", 00:16:21.358 "adrfam": "IPv4", 00:16:21.358 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:21.358 "trsvcid": "0" 00:16:21.358 } 00:16:21.358 ], 00:16:21.358 "allow_any_host": true, 00:16:21.358 "hosts": [], 00:16:21.358 "serial_number": "SPDK1", 00:16:21.358 "model_number": "SPDK bdev Controller", 00:16:21.358 "max_namespaces": 32, 00:16:21.358 "min_cntlid": 1, 00:16:21.358 "max_cntlid": 65519, 00:16:21.358 "namespaces": [ 00:16:21.358 { 00:16:21.358 "nsid": 1, 00:16:21.358 "bdev_name": "Malloc1", 00:16:21.358 "name": "Malloc1", 00:16:21.358 "nguid": "E3B177FC5FED438CA10280CCB1E6257B", 00:16:21.358 "uuid": "e3b177fc-5fed-438c-a102-80ccb1e6257b" 00:16:21.358 } 00:16:21.358 ] 00:16:21.358 }, 00:16:21.358 { 00:16:21.358 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:21.358 "subtype": "NVMe", 00:16:21.358 "listen_addresses": [ 00:16:21.358 { 00:16:21.358 "trtype": "VFIOUSER", 00:16:21.358 "adrfam": "IPv4", 00:16:21.358 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:21.358 "trsvcid": "0" 00:16:21.358 } 00:16:21.358 ], 00:16:21.358 "allow_any_host": true, 00:16:21.358 "hosts": [], 00:16:21.358 "serial_number": "SPDK2", 00:16:21.358 "model_number": "SPDK bdev Controller", 00:16:21.358 "max_namespaces": 32, 00:16:21.358 "min_cntlid": 1, 00:16:21.358 "max_cntlid": 65519, 00:16:21.358 "namespaces": [ 00:16:21.358 { 00:16:21.358 "nsid": 1, 00:16:21.358 "bdev_name": "Malloc2", 00:16:21.358 "name": "Malloc2", 00:16:21.358 "nguid": "3561A553ACF14ADD810690A8E16C2C6C", 00:16:21.358 "uuid": "3561a553-acf1-4add-8106-90a8e16c2c6c" 00:16:21.358 } 00:16:21.358 ] 00:16:21.358 } 00:16:21.358 ] 00:16:21.358 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:21.358 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1925945 00:16:21.358 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:21.358 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:21.358 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:21.358 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:21.358 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:21.358 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:21.358 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:21.358 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:21.619 [2024-11-28 08:15:18.646887] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:21.619 Malloc3 00:16:21.619 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:21.619 [2024-11-28 08:15:18.848308] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:21.619 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:21.619 Asynchronous Event Request test 00:16:21.619 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:21.619 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:21.619 Registering asynchronous event callbacks... 00:16:21.619 Starting namespace attribute notice tests for all controllers... 00:16:21.619 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:21.619 aer_cb - Changed Namespace 00:16:21.619 Cleaning up... 00:16:21.881 [ 00:16:21.881 { 00:16:21.881 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:21.881 "subtype": "Discovery", 00:16:21.881 "listen_addresses": [], 00:16:21.881 "allow_any_host": true, 00:16:21.881 "hosts": [] 00:16:21.881 }, 00:16:21.881 { 00:16:21.881 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:21.881 "subtype": "NVMe", 00:16:21.881 "listen_addresses": [ 00:16:21.881 { 00:16:21.881 "trtype": "VFIOUSER", 00:16:21.881 "adrfam": "IPv4", 00:16:21.881 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:21.881 "trsvcid": "0" 00:16:21.881 } 00:16:21.881 ], 00:16:21.881 "allow_any_host": true, 00:16:21.881 "hosts": [], 00:16:21.881 "serial_number": "SPDK1", 00:16:21.881 "model_number": "SPDK bdev Controller", 00:16:21.881 "max_namespaces": 32, 00:16:21.881 "min_cntlid": 1, 00:16:21.881 "max_cntlid": 65519, 00:16:21.881 "namespaces": [ 00:16:21.881 { 00:16:21.881 "nsid": 1, 00:16:21.881 "bdev_name": "Malloc1", 00:16:21.881 "name": "Malloc1", 00:16:21.881 "nguid": "E3B177FC5FED438CA10280CCB1E6257B", 00:16:21.881 "uuid": "e3b177fc-5fed-438c-a102-80ccb1e6257b" 00:16:21.881 }, 00:16:21.881 { 00:16:21.881 "nsid": 2, 00:16:21.881 "bdev_name": "Malloc3", 00:16:21.881 "name": "Malloc3", 00:16:21.881 "nguid": "EC58CF244BEF4FEA9394AC2AB2C25F0F", 00:16:21.881 "uuid": "ec58cf24-4bef-4fea-9394-ac2ab2c25f0f" 00:16:21.881 } 00:16:21.881 ] 00:16:21.881 }, 00:16:21.881 { 00:16:21.881 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:21.881 "subtype": "NVMe", 00:16:21.881 "listen_addresses": [ 00:16:21.881 { 00:16:21.881 "trtype": "VFIOUSER", 00:16:21.881 "adrfam": "IPv4", 00:16:21.881 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:21.881 "trsvcid": "0" 00:16:21.881 } 00:16:21.881 ], 00:16:21.881 "allow_any_host": true, 00:16:21.881 "hosts": [], 00:16:21.881 "serial_number": "SPDK2", 00:16:21.881 "model_number": "SPDK bdev 
Controller", 00:16:21.881 "max_namespaces": 32, 00:16:21.881 "min_cntlid": 1, 00:16:21.881 "max_cntlid": 65519, 00:16:21.881 "namespaces": [ 00:16:21.881 { 00:16:21.881 "nsid": 1, 00:16:21.881 "bdev_name": "Malloc2", 00:16:21.881 "name": "Malloc2", 00:16:21.881 "nguid": "3561A553ACF14ADD810690A8E16C2C6C", 00:16:21.881 "uuid": "3561a553-acf1-4add-8106-90a8e16c2c6c" 00:16:21.881 } 00:16:21.881 ] 00:16:21.881 } 00:16:21.881 ] 00:16:21.881 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1925945 00:16:21.881 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:21.881 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:21.881 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:21.881 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:21.881 [2024-11-28 08:15:19.086015] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:16:21.881 [2024-11-28 08:15:19.086056] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1926200 ] 00:16:21.881 [2024-11-28 08:15:19.126413] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:21.881 [2024-11-28 08:15:19.135369] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:21.881 [2024-11-28 08:15:19.135389] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f228739f000 00:16:21.881 [2024-11-28 08:15:19.136374] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:21.881 [2024-11-28 08:15:19.137396] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:21.881 [2024-11-28 08:15:19.138393] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:21.882 [2024-11-28 08:15:19.139395] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:21.882 [2024-11-28 08:15:19.140402] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:21.882 [2024-11-28 08:15:19.141407] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:21.882 [2024-11-28 08:15:19.142408] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:21.882 [2024-11-28 08:15:19.143417] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:16:21.882 [2024-11-28 08:15:19.144425] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:21.882 [2024-11-28 08:15:19.144433] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2287394000 00:16:21.882 [2024-11-28 08:15:19.145346] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:21.882 [2024-11-28 08:15:19.154733] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:21.882 [2024-11-28 08:15:19.154752] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:16:21.882 [2024-11-28 08:15:19.159818] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:21.882 [2024-11-28 08:15:19.159850] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:21.882 [2024-11-28 08:15:19.159915] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:16:21.882 [2024-11-28 08:15:19.159925] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:16:21.882 [2024-11-28 08:15:19.159929] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:16:21.882 [2024-11-28 08:15:19.160820] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:21.882 [2024-11-28 08:15:19.160829] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:16:21.882 [2024-11-28 08:15:19.160835] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:16:21.882 [2024-11-28 08:15:19.161830] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:21.882 [2024-11-28 08:15:19.161836] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:16:21.882 [2024-11-28 08:15:19.161842] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:21.882 [2024-11-28 08:15:19.162837] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:21.882 [2024-11-28 08:15:19.162844] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:21.882 [2024-11-28 08:15:19.163844] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:21.882 [2024-11-28 08:15:19.163851] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:16:21.882 [2024-11-28 08:15:19.163854] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:21.882 [2024-11-28 08:15:19.163859] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:21.882 [2024-11-28 08:15:19.163965] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:16:21.882 [2024-11-28 08:15:19.163968] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:21.882 [2024-11-28 08:15:19.163972] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:21.882 [2024-11-28 08:15:19.164850] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:21.882 [2024-11-28 08:15:19.165854] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:21.882 [2024-11-28 08:15:19.166863] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:21.882 [2024-11-28 08:15:19.167865] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:21.882 [2024-11-28 08:15:19.167897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:22.146 [2024-11-28 08:15:19.168875] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:22.146 [2024-11-28 08:15:19.168884] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:22.146 [2024-11-28 08:15:19.168889] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:22.146 [2024-11-28 08:15:19.168904] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:16:22.146 [2024-11-28 08:15:19.168910] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:22.146 [2024-11-28 08:15:19.168921] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:22.146 [2024-11-28 08:15:19.168925] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:22.146 [2024-11-28 08:15:19.168927] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:22.146 [2024-11-28 08:15:19.168936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:22.146 [2024-11-28 08:15:19.176164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:22.146 
[2024-11-28 08:15:19.176174] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:16:22.146 [2024-11-28 08:15:19.176177] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:16:22.146 [2024-11-28 08:15:19.176180] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:16:22.146 [2024-11-28 08:15:19.176184] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:22.146 [2024-11-28 08:15:19.176187] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:16:22.146 [2024-11-28 08:15:19.176190] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:16:22.146 [2024-11-28 08:15:19.176194] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:16:22.146 [2024-11-28 08:15:19.176199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:22.146 [2024-11-28 08:15:19.176206] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:22.146 [2024-11-28 08:15:19.184163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:22.146 [2024-11-28 08:15:19.184173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.146 [2024-11-28 08:15:19.184179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.146 [2024-11-28 08:15:19.184185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.146 [2024-11-28 08:15:19.184191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.146 [2024-11-28 08:15:19.184195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:22.146 [2024-11-28 08:15:19.184202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:22.146 [2024-11-28 08:15:19.184209] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:22.146 [2024-11-28 08:15:19.192164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:22.146 [2024-11-28 08:15:19.192170] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:16:22.146 [2024-11-28 08:15:19.192174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:16:22.146 [2024-11-28 08:15:19.192180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:16:22.146 [2024-11-28 08:15:19.192185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:22.146 [2024-11-28 08:15:19.192191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:22.146 [2024-11-28 08:15:19.200164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:22.146 [2024-11-28 08:15:19.200212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:16:22.146 [2024-11-28 08:15:19.200218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:22.146 [2024-11-28 08:15:19.200223] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:22.146 [2024-11-28 08:15:19.200227] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:22.146 [2024-11-28 08:15:19.200229] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:22.146 [2024-11-28 08:15:19.200234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:22.146 [2024-11-28 08:15:19.208164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:22.146 [2024-11-28 08:15:19.208175] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:16:22.146 [2024-11-28 08:15:19.208181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:16:22.146 [2024-11-28 08:15:19.208186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:22.146 [2024-11-28 08:15:19.208192] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:22.146 [2024-11-28 08:15:19.208195] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:22.146 [2024-11-28 08:15:19.208197] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:22.146 [2024-11-28 08:15:19.208202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:22.146 [2024-11-28 08:15:19.216164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:22.146 [2024-11-28 08:15:19.216173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:22.146 [2024-11-28 08:15:19.216179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:16:22.146 [2024-11-28 08:15:19.216184] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:22.146 [2024-11-28 08:15:19.216187] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:22.146 [2024-11-28 08:15:19.216191] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:22.146 [2024-11-28 08:15:19.216196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:22.146 [2024-11-28 08:15:19.224163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:22.146 [2024-11-28 08:15:19.224173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:22.146 [2024-11-28 08:15:19.224178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:22.146 [2024-11-28 08:15:19.224183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:16:22.146 [2024-11-28 08:15:19.224187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:22.146 [2024-11-28 08:15:19.224191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:22.146 [2024-11-28 08:15:19.224195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:16:22.146 [2024-11-28 08:15:19.224198] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:22.147 [2024-11-28 08:15:19.224202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:16:22.147 [2024-11-28 08:15:19.224205] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:16:22.147 [2024-11-28 08:15:19.224218] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:22.147 [2024-11-28 08:15:19.232163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:22.147 [2024-11-28 08:15:19.232174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:22.147 [2024-11-28 08:15:19.240164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:22.147 [2024-11-28 08:15:19.240174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:22.147 [2024-11-28 08:15:19.248164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:16:22.147 [2024-11-28 08:15:19.248175] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:22.147 [2024-11-28 08:15:19.256165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:22.147 [2024-11-28 08:15:19.256178] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:22.147 [2024-11-28 08:15:19.256181] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:22.147 [2024-11-28 08:15:19.256184] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:22.147 [2024-11-28 08:15:19.256186] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:22.147 [2024-11-28 08:15:19.256189] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:22.147 [2024-11-28 08:15:19.256193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:22.147 [2024-11-28 08:15:19.256200] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:22.147 [2024-11-28 08:15:19.256204] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:22.147 [2024-11-28 08:15:19.256206] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:22.147 [2024-11-28 08:15:19.256210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:22.147 [2024-11-28 08:15:19.256215] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:22.147 [2024-11-28 08:15:19.256219] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:22.147 [2024-11-28 08:15:19.256221] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:22.147 [2024-11-28 08:15:19.256225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:22.147 [2024-11-28 08:15:19.256231] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:22.147 [2024-11-28 08:15:19.256234] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:22.147 [2024-11-28 08:15:19.256236] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:22.147 [2024-11-28 08:15:19.256240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:22.147 [2024-11-28 08:15:19.264164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:22.147 [2024-11-28 08:15:19.264175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:22.147 [2024-11-28 08:15:19.264183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:22.147 
[2024-11-28 08:15:19.264188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:22.147 ===================================================== 00:16:22.147 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:22.147 ===================================================== 00:16:22.147 Controller Capabilities/Features 00:16:22.147 ================================ 00:16:22.147 Vendor ID: 4e58 00:16:22.147 Subsystem Vendor ID: 4e58 00:16:22.147 Serial Number: SPDK2 00:16:22.147 Model Number: SPDK bdev Controller 00:16:22.147 Firmware Version: 25.01 00:16:22.147 Recommended Arb Burst: 6 00:16:22.147 IEEE OUI Identifier: 8d 6b 50 00:16:22.147 Multi-path I/O 00:16:22.147 May have multiple subsystem ports: Yes 00:16:22.147 May have multiple controllers: Yes 00:16:22.147 Associated with SR-IOV VF: No 00:16:22.147 Max Data Transfer Size: 131072 00:16:22.147 Max Number of Namespaces: 32 00:16:22.147 Max Number of I/O Queues: 127 00:16:22.147 NVMe Specification Version (VS): 1.3 00:16:22.147 NVMe Specification Version (Identify): 1.3 00:16:22.147 Maximum Queue Entries: 256 00:16:22.147 Contiguous Queues Required: Yes 00:16:22.147 Arbitration Mechanisms Supported 00:16:22.147 Weighted Round Robin: Not Supported 00:16:22.147 Vendor Specific: Not Supported 00:16:22.147 Reset Timeout: 15000 ms 00:16:22.147 Doorbell Stride: 4 bytes 00:16:22.147 NVM Subsystem Reset: Not Supported 00:16:22.147 Command Sets Supported 00:16:22.147 NVM Command Set: Supported 00:16:22.147 Boot Partition: Not Supported 00:16:22.147 Memory Page Size Minimum: 4096 bytes 00:16:22.147 Memory Page Size Maximum: 4096 bytes 00:16:22.147 Persistent Memory Region: Not Supported 00:16:22.147 Optional Asynchronous Events Supported 00:16:22.147 Namespace Attribute Notices: Supported 00:16:22.147 Firmware Activation Notices: Not Supported 00:16:22.147 ANA Change Notices: Not Supported 00:16:22.147 PLE Aggregate Log Change Notices: Not Supported 00:16:22.147 LBA Status Info Alert Notices: Not Supported 00:16:22.147 EGE Aggregate Log Change Notices: Not Supported 00:16:22.147 Normal NVM Subsystem Shutdown event: Not Supported 00:16:22.147 Zone Descriptor Change Notices: Not Supported 00:16:22.147 Discovery Log Change Notices: Not Supported 00:16:22.147 Controller Attributes 00:16:22.147 128-bit Host Identifier: Supported 00:16:22.147 Non-Operational Permissive Mode: Not Supported 00:16:22.147 NVM Sets: Not Supported 00:16:22.147 Read Recovery Levels: Not Supported 00:16:22.147 Endurance Groups: Not Supported 00:16:22.147 Predictable Latency Mode: Not Supported 00:16:22.147 Traffic Based Keep ALive: Not Supported 00:16:22.147 Namespace Granularity: Not Supported 00:16:22.147 SQ Associations: Not Supported 00:16:22.147 UUID List: Not Supported 00:16:22.147 Multi-Domain Subsystem: Not Supported 00:16:22.147 Fixed Capacity Management: Not Supported 00:16:22.147 Variable Capacity Management: Not Supported 00:16:22.147 Delete Endurance Group: Not Supported 00:16:22.147 Delete NVM Set: Not Supported 00:16:22.147 Extended LBA Formats Supported: Not Supported 00:16:22.147 Flexible Data Placement Supported: Not Supported 00:16:22.147 00:16:22.147 Controller Memory Buffer Support 00:16:22.147 ================================ 00:16:22.147 Supported: No 00:16:22.147 00:16:22.147 Persistent Memory Region Support 00:16:22.147 ================================ 00:16:22.147 Supported: No 00:16:22.147 00:16:22.147 Admin Command Set Attributes 
00:16:22.147 ============================ 00:16:22.147 Security Send/Receive: Not Supported 00:16:22.147 Format NVM: Not Supported 00:16:22.147 Firmware Activate/Download: Not Supported 00:16:22.147 Namespace Management: Not Supported 00:16:22.147 Device Self-Test: Not Supported 00:16:22.147 Directives: Not Supported 00:16:22.147 NVMe-MI: Not Supported 00:16:22.147 Virtualization Management: Not Supported 00:16:22.147 Doorbell Buffer Config: Not Supported 00:16:22.147 Get LBA Status Capability: Not Supported 00:16:22.147 Command & Feature Lockdown Capability: Not Supported 00:16:22.147 Abort Command Limit: 4 00:16:22.147 Async Event Request Limit: 4 00:16:22.147 Number of Firmware Slots: N/A 00:16:22.147 Firmware Slot 1 Read-Only: N/A 00:16:22.147 Firmware Activation Without Reset: N/A 00:16:22.147 Multiple Update Detection Support: N/A 00:16:22.147 Firmware Update Granularity: No Information Provided 00:16:22.147 Per-Namespace SMART Log: No 00:16:22.147 Asymmetric Namespace Access Log Page: Not Supported 00:16:22.147 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:22.147 Command Effects Log Page: Supported 00:16:22.147 Get Log Page Extended Data: Supported 00:16:22.147 Telemetry Log Pages: Not Supported 00:16:22.147 Persistent Event Log Pages: Not Supported 00:16:22.147 Supported Log Pages Log Page: May Support 00:16:22.147 Commands Supported & Effects Log Page: Not Supported 00:16:22.147 Feature Identifiers & Effects Log Page:May Support 00:16:22.147 NVMe-MI Commands & Effects Log Page: May Support 00:16:22.147 Data Area 4 for Telemetry Log: Not Supported 00:16:22.147 Error Log Page Entries Supported: 128 00:16:22.147 Keep Alive: Supported 00:16:22.147 Keep Alive Granularity: 10000 ms 00:16:22.147 00:16:22.147 NVM Command Set Attributes 00:16:22.147 ========================== 00:16:22.147 Submission Queue Entry Size 00:16:22.147 Max: 64 00:16:22.147 Min: 64 00:16:22.147 Completion Queue Entry Size 00:16:22.147 Max: 16 00:16:22.147 Min: 16 00:16:22.147 Number of Namespaces: 32 00:16:22.147 Compare Command: Supported 00:16:22.147 Write Uncorrectable Command: Not Supported 00:16:22.147 Dataset Management Command: Supported 00:16:22.148 Write Zeroes Command: Supported 00:16:22.148 Set Features Save Field: Not Supported 00:16:22.148 Reservations: Not Supported 00:16:22.148 Timestamp: Not Supported 00:16:22.148 Copy: Supported 00:16:22.148 Volatile Write Cache: Present 00:16:22.148 Atomic Write Unit (Normal): 1 00:16:22.148 Atomic Write Unit (PFail): 1 00:16:22.148 Atomic Compare & Write Unit: 1 00:16:22.148 Fused Compare & Write: Supported 00:16:22.148 Scatter-Gather List 00:16:22.148 SGL Command Set: Supported (Dword aligned) 00:16:22.148 SGL Keyed: Not Supported 00:16:22.148 SGL Bit Bucket Descriptor: Not Supported 00:16:22.148 SGL Metadata Pointer: Not Supported 00:16:22.148 Oversized SGL: Not Supported 00:16:22.148 SGL Metadata Address: Not Supported 00:16:22.148 SGL Offset: Not Supported 00:16:22.148 Transport SGL Data Block: Not Supported 00:16:22.148 Replay Protected Memory Block: Not Supported 00:16:22.148 00:16:22.148 Firmware Slot Information 00:16:22.148 ========================= 00:16:22.148 Active slot: 1 00:16:22.148 Slot 1 Firmware Revision: 25.01 00:16:22.148 00:16:22.148 00:16:22.148 Commands Supported and Effects 00:16:22.148 ============================== 00:16:22.148 Admin Commands 00:16:22.148 -------------- 00:16:22.148 Get Log Page (02h): Supported 00:16:22.148 Identify (06h): Supported 00:16:22.148 Abort (08h): Supported 00:16:22.148 Set Features (09h): Supported 
00:16:22.148 Get Features (0Ah): Supported 00:16:22.148 Asynchronous Event Request (0Ch): Supported 00:16:22.148 Keep Alive (18h): Supported 00:16:22.148 I/O Commands 00:16:22.148 ------------ 00:16:22.148 Flush (00h): Supported LBA-Change 00:16:22.148 Write (01h): Supported LBA-Change 00:16:22.148 Read (02h): Supported 00:16:22.148 Compare (05h): Supported 00:16:22.148 Write Zeroes (08h): Supported LBA-Change 00:16:22.148 Dataset Management (09h): Supported LBA-Change 00:16:22.148 Copy (19h): Supported LBA-Change 00:16:22.148 00:16:22.148 Error Log 00:16:22.148 ========= 00:16:22.148 00:16:22.148 Arbitration 00:16:22.148 =========== 00:16:22.148 Arbitration Burst: 1 00:16:22.148 00:16:22.148 Power Management 00:16:22.148 ================ 00:16:22.148 Number of Power States: 1 00:16:22.148 Current Power State: Power State #0 00:16:22.148 Power State #0: 00:16:22.148 Max Power: 0.00 W 00:16:22.148 Non-Operational State: Operational 00:16:22.148 Entry Latency: Not Reported 00:16:22.148 Exit Latency: Not Reported 00:16:22.148 Relative Read Throughput: 0 00:16:22.148 Relative Read Latency: 0 00:16:22.148 Relative Write Throughput: 0 00:16:22.148 Relative Write Latency: 0 00:16:22.148 Idle Power: Not Reported 00:16:22.148 Active Power: Not Reported 00:16:22.148 Non-Operational Permissive Mode: Not Supported 00:16:22.148 00:16:22.148 Health Information 00:16:22.148 ================== 00:16:22.148 Critical Warnings: 00:16:22.148 Available Spare Space: OK 00:16:22.148 Temperature: OK 00:16:22.148 Device Reliability: OK 00:16:22.148 Read Only: No 00:16:22.148 Volatile Memory Backup: OK 00:16:22.148 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:22.148 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:22.148 Available Spare: 0% 00:16:22.148 [2024-11-28 08:15:19.264260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:22.148 [2024-11-28 08:15:19.272164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:22.148 [2024-11-28 08:15:19.272189] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:16:22.148 [2024-11-28 08:15:19.272196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.148 [2024-11-28 08:15:19.272200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.148 [2024-11-28 08:15:19.272205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.148 [2024-11-28 08:15:19.272209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.148 [2024-11-28 08:15:19.272241] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:22.148 [2024-11-28 08:15:19.272248] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:22.148 [2024-11-28 08:15:19.273252] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:22.148 [2024-11-28 08:15:19.273289] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*:
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:16:22.148 [2024-11-28 08:15:19.273294] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:16:22.148 [2024-11-28 08:15:19.274256] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:22.148 [2024-11-28 08:15:19.274266] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:16:22.148 [2024-11-28 08:15:19.274307] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:22.148 [2024-11-28 08:15:19.275272] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:22.148 Available Spare Threshold: 0% 00:16:22.148 Life Percentage Used: 0% 00:16:22.148 Data Units Read: 0 00:16:22.148 Data Units Written: 0 00:16:22.148 Host Read Commands: 0 00:16:22.148 Host Write Commands: 0 00:16:22.148 Controller Busy Time: 0 minutes 00:16:22.148 Power Cycles: 0 00:16:22.148 Power On Hours: 0 hours 00:16:22.148 Unsafe Shutdowns: 0 00:16:22.148 Unrecoverable Media Errors: 0 00:16:22.148 Lifetime Error Log Entries: 0 00:16:22.148 Warning Temperature Time: 0 minutes 00:16:22.148 Critical Temperature Time: 0 minutes 00:16:22.148 00:16:22.148 Number of Queues 00:16:22.148 ================ 00:16:22.148 Number of I/O Submission Queues: 127 00:16:22.148 Number of I/O Completion Queues: 127 00:16:22.148 00:16:22.148 Active Namespaces 00:16:22.148 ================= 00:16:22.148 Namespace ID:1 00:16:22.148 Error Recovery Timeout: Unlimited 00:16:22.148 Command Set Identifier: NVM (00h) 00:16:22.148 Deallocate: Supported 00:16:22.148 Deallocated/Unwritten Error: Not Supported 00:16:22.148 Deallocated Read Value: Unknown 00:16:22.148 Deallocate in Write Zeroes: Not Supported 00:16:22.148 Deallocated Guard Field: 0xFFFF 00:16:22.148 Flush: Supported 00:16:22.148 Reservation: Supported 00:16:22.148 Namespace Sharing Capabilities: Multiple Controllers 00:16:22.148 Size (in LBAs): 131072 (0GiB) 00:16:22.148 Capacity (in LBAs): 131072 (0GiB) 00:16:22.148 Utilization (in LBAs): 131072 (0GiB) 00:16:22.148 NGUID: 3561A553ACF14ADD810690A8E16C2C6C 00:16:22.148 UUID: 3561a553-acf1-4add-8106-90a8e16c2c6c 00:16:22.148 Thin Provisioning: Not Supported 00:16:22.148 Per-NS Atomic Units: Yes 00:16:22.148 Atomic Boundary Size (Normal): 0 00:16:22.148 Atomic Boundary Size (PFail): 0 00:16:22.148 Atomic Boundary Offset: 0 00:16:22.148 Maximum Single Source Range Length: 65535 00:16:22.148 Maximum Copy Length: 65535 00:16:22.148 Maximum Source Range Count: 1 00:16:22.148 NGUID/EUI64 Never Reused: No 00:16:22.148 Namespace Write Protected: No 00:16:22.148 Number of LBA Formats: 1 00:16:22.148 Current LBA Format: LBA Format #00 00:16:22.148 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:22.148 00:16:22.148 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:22.410 [2024-11-28 08:15:19.464188] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:27.701 Initializing NVMe Controllers 00:16:27.701
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:27.701 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:27.701 Initialization complete. Launching workers. 00:16:27.701 ======================================================== 00:16:27.701 Latency(us) 00:16:27.701 Device Information : IOPS MiB/s Average min max 00:16:27.701 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39993.60 156.22 3202.89 863.80 7768.51 00:16:27.701 ======================================================== 00:16:27.701 Total : 39993.60 156.22 3202.89 863.80 7768.51 00:16:27.701 00:16:27.701 [2024-11-28 08:15:24.573349] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:27.701 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:27.701 [2024-11-28 08:15:24.765968] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:32.983 Initializing NVMe Controllers 00:16:32.983 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:32.983 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:32.984 Initialization complete. Launching workers. 00:16:32.984 ======================================================== 00:16:32.984 Latency(us) 00:16:32.984 Device Information : IOPS MiB/s Average min max 00:16:32.984 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39975.41 156.15 3201.84 854.14 9748.72 00:16:32.984 ======================================================== 00:16:32.984 Total : 39975.41 156.15 3201.84 854.14 9748.72 00:16:32.984 00:16:32.984 [2024-11-28 08:15:29.782795] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:32.984 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:32.984 [2024-11-28 08:15:29.983525] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:38.269 [2024-11-28 08:15:35.117247] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:38.269 Initializing NVMe Controllers 00:16:38.269 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:38.269 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:38.269 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:38.269 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:38.269 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:38.269 Initialization complete. Launching workers. 
00:16:38.269 Starting thread on core 2 00:16:38.269 Starting thread on core 3 00:16:38.269 Starting thread on core 1 00:16:38.269 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:38.269 [2024-11-28 08:15:35.365559] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:41.734 [2024-11-28 08:15:38.425299] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:41.734 Initializing NVMe Controllers 00:16:41.734 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:41.734 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:41.734 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:41.734 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:41.734 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:41.734 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:41.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:41.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:41.734 Initialization complete. Launching workers. 00:16:41.734 Starting thread on core 1 with urgent priority queue 00:16:41.734 Starting thread on core 2 with urgent priority queue 00:16:41.734 Starting thread on core 3 with urgent priority queue 00:16:41.734 Starting thread on core 0 with urgent priority queue 00:16:41.734 SPDK bdev Controller (SPDK2 ) core 0: 14560.00 IO/s 6.87 secs/100000 ios 00:16:41.734 SPDK bdev Controller (SPDK2 ) core 1: 12856.67 IO/s 7.78 secs/100000 ios 00:16:41.734 SPDK bdev Controller (SPDK2 ) core 2: 8005.00 IO/s 12.49 secs/100000 ios 00:16:41.734 SPDK bdev Controller (SPDK2 ) core 3: 12141.33 IO/s 8.24 secs/100000 ios 00:16:41.734 ======================================================== 00:16:41.734 00:16:41.734 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:41.734 [2024-11-28 08:15:38.659873] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:41.734 Initializing NVMe Controllers 00:16:41.734 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:41.734 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:41.735 Namespace ID: 1 size: 0GB 00:16:41.735 Initialization complete. 00:16:41.735 INFO: using host memory buffer for IO 00:16:41.735 Hello world! 
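Every example in this stretch (spdk_nvme_perf, reconnect, arbitration, hello_world) reaches the target the same way: a transport ID string that names the VFIOUSER transport, gives the vfio-user socket directory as traddr, and names the subsystem NQN. A minimal sketch of that flow, assuming a target is already running and reusing the paths and NQN from this run ($SPDK is a hypothetical shorthand for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout):

    TRADDR=/var/run/vfio-user/domain/vfio-user2/2
    # expose a 64 MiB malloc bdev as namespace 1 of cnode2 on the vfio-user socket dir
    $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a $TRADDR -s 0
    # 5 s of queued 4 KiB reads on core 1, matching the @84 perf run above
    $SPDK/build/bin/spdk_nvme_perf -r "trtype:VFIOUSER traddr:$TRADDR subnqn:nqn.2019-07.io.spdk:cnode2" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2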
00:16:41.735 [2024-11-28 08:15:38.669939] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:41.735 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:41.735 [2024-11-28 08:15:38.907536] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:43.124 Initializing NVMe Controllers 00:16:43.124 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:43.124 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:43.124 Initialization complete. Launching workers. 00:16:43.124 submit (in ns) avg, min, max = 7120.0, 2815.0, 4000112.5 00:16:43.124 complete (in ns) avg, min, max = 16749.6, 1629.2, 3998627.5 00:16:43.124 00:16:43.124 Submit histogram 00:16:43.124 ================ 00:16:43.124 Range in us Cumulative Count 00:16:43.124 2.813 - 2.827: 0.3814% ( 77) 00:16:43.124 2.827 - 2.840: 1.5799% ( 242) 00:16:43.124 2.840 - 2.853: 4.7150% ( 633) 00:16:43.124 2.853 - 2.867: 10.3115% ( 1130) 00:16:43.124 2.867 - 2.880: 16.6015% ( 1270) 00:16:43.124 2.880 - 2.893: 21.5542% ( 1000) 00:16:43.124 2.893 - 2.907: 27.2844% ( 1157) 00:16:43.124 2.907 - 2.920: 32.0985% ( 972) 00:16:43.124 2.920 - 2.933: 37.3533% ( 1061) 00:16:43.124 2.933 - 2.947: 42.2416% ( 987) 00:16:43.124 2.947 - 2.960: 47.3676% ( 1035) 00:16:43.124 2.960 - 2.973: 53.6378% ( 1266) 00:16:43.124 2.973 - 2.987: 62.2109% ( 1731) 00:16:43.124 2.987 - 3.000: 71.0069% ( 1776) 00:16:43.124 3.000 - 3.013: 79.3670% ( 1688) 00:16:43.124 3.013 - 3.027: 86.0581% ( 1351) 00:16:43.124 3.027 - 3.040: 91.4516% ( 1089) 00:16:43.124 3.040 - 3.053: 95.3197% ( 781) 00:16:43.124 3.053 - 3.067: 97.7019% ( 481) 00:16:43.124 3.067 - 3.080: 98.6479% ( 191) 00:16:43.124 3.080 - 3.093: 99.1481% ( 101) 00:16:43.124 3.093 - 3.107: 99.3760% ( 46) 00:16:43.124 3.107 - 3.120: 99.4552% ( 16) 00:16:43.124 3.120 - 3.133: 99.4899% ( 7) 00:16:43.124 3.133 - 3.147: 99.5196% ( 6) 00:16:43.124 3.160 - 3.173: 99.5245% ( 1) 00:16:43.124 3.173 - 3.187: 99.5295% ( 1) 00:16:43.124 3.187 - 3.200: 99.5344% ( 1) 00:16:43.124 3.200 - 3.213: 99.5394% ( 1) 00:16:43.124 3.240 - 3.253: 99.5444% ( 1) 00:16:43.124 3.253 - 3.267: 99.5493% ( 1) 00:16:43.124 3.267 - 3.280: 99.5543% ( 1) 00:16:43.124 3.293 - 3.307: 99.5592% ( 1) 00:16:43.124 3.333 - 3.347: 99.5642% ( 1) 00:16:43.124 3.347 - 3.360: 99.5691% ( 1) 00:16:43.124 3.373 - 3.387: 99.5790% ( 2) 00:16:43.124 3.760 - 3.787: 99.5840% ( 1) 00:16:43.124 3.813 - 3.840: 99.5889% ( 1) 00:16:43.124 3.973 - 4.000: 99.5939% ( 1) 00:16:43.124 4.267 - 4.293: 99.5988% ( 1) 00:16:43.124 4.347 - 4.373: 99.6038% ( 1) 00:16:43.124 4.373 - 4.400: 99.6137% ( 2) 00:16:43.124 4.427 - 4.453: 99.6186% ( 1) 00:16:43.124 4.507 - 4.533: 99.6236% ( 1) 00:16:43.124 4.587 - 4.613: 99.6335% ( 2) 00:16:43.124 4.693 - 4.720: 99.6385% ( 1) 00:16:43.124 4.747 - 4.773: 99.6434% ( 1) 00:16:43.124 4.773 - 4.800: 99.6533% ( 2) 00:16:43.124 4.827 - 4.853: 99.6583% ( 1) 00:16:43.124 4.880 - 4.907: 99.6682% ( 2) 00:16:43.124 4.907 - 4.933: 99.6731% ( 1) 00:16:43.124 4.933 - 4.960: 99.6781% ( 1) 00:16:43.124 4.960 - 4.987: 99.6830% ( 1) 00:16:43.124 4.987 - 5.013: 99.6880% ( 1) 00:16:43.124 5.013 - 5.040: 99.6929% ( 1) 00:16:43.124 5.040 - 5.067: 99.7028% ( 2) 00:16:43.124 5.227 - 5.253: 99.7078% ( 1) 00:16:43.124 5.360 - 5.387: 
99.7127% ( 1) 00:16:43.124 5.413 - 5.440: 99.7226% ( 2) 00:16:43.124 5.467 - 5.493: 99.7276% ( 1) 00:16:43.124 5.680 - 5.707: 99.7326% ( 1) 00:16:43.124 5.787 - 5.813: 99.7375% ( 1) 00:16:43.124 5.813 - 5.840: 99.7474% ( 2) 00:16:43.124 5.867 - 5.893: 99.7672% ( 4) 00:16:43.124 5.920 - 5.947: 99.7722% ( 1) 00:16:43.124 5.973 - 6.000: 99.7771% ( 1) 00:16:43.124 6.080 - 6.107: 99.7821% ( 1) 00:16:43.124 6.160 - 6.187: 99.7870% ( 1) 00:16:43.124 6.187 - 6.213: 99.7969% ( 2) 00:16:43.124 6.427 - 6.453: 99.8019% ( 1) 00:16:43.124 6.453 - 6.480: 99.8118% ( 2) 00:16:43.124 6.640 - 6.667: 99.8168% ( 1) 00:16:43.124 6.693 - 6.720: 99.8217% ( 1) 00:16:43.124 6.747 - 6.773: 99.8267% ( 1) 00:16:43.124 6.773 - 6.800: 99.8316% ( 1) 00:16:43.124 6.827 - 6.880: 99.8415% ( 2) 00:16:43.124 6.880 - 6.933: 99.8465% ( 1) 00:16:43.124 6.987 - 7.040: 99.8514% ( 1) 00:16:43.124 7.040 - 7.093: 99.8564% ( 1) 00:16:43.124 [2024-11-28 08:15:39.998682] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:43.124 7.093 - 7.147: 99.8613% ( 1) 00:16:43.124 7.307 - 7.360: 99.8663% ( 1) 00:16:43.124 7.360 - 7.413: 99.8712% ( 1) 00:16:43.124 7.520 - 7.573: 99.8762% ( 1) 00:16:43.124 7.627 - 7.680: 99.8811% ( 1) 00:16:43.124 8.373 - 8.427: 99.8861% ( 1) 00:16:43.124 8.533 - 8.587: 99.8910% ( 1) 00:16:43.124 12.267 - 12.320: 99.8960% ( 1) 00:16:43.124 3986.773 - 4014.080: 100.0000% ( 21) 00:16:43.124 00:16:43.124 Complete histogram 00:16:43.124 ================== 00:16:43.124 Range in us Cumulative Count 00:16:43.124 1.627 - 1.633: 0.0050% ( 1) 00:16:43.124 1.633 - 1.640: 0.0099% ( 1) 00:16:43.124 1.640 - 1.647: 0.4903% ( 97) 00:16:43.124 1.647 - 1.653: 1.0747% ( 118) 00:16:43.124 1.653 - 1.660: 1.1441% ( 14) 00:16:43.124 1.660 - 1.667: 1.2580% ( 23) 00:16:43.124 1.667 - 1.673: 1.4214% ( 33) 00:16:43.124 1.673 - 1.680: 1.4610% ( 8) 00:16:43.124 1.680 - 1.687: 1.4710% ( 2) 00:16:43.124 1.687 - 1.693: 17.3790% ( 3212) 00:16:43.124 1.693 - 1.700: 42.4645% ( 5065) 00:16:43.124 1.700 - 1.707: 61.6760% ( 3879) 00:16:43.124 1.707 - 1.720: 78.1239% ( 3321) 00:16:43.124 1.720 - 1.733: 83.5917% ( 1104) 00:16:43.124 1.733 - 1.747: 84.7952% ( 243) 00:16:43.124 1.747 - 1.760: 87.6232% ( 571) 00:16:43.124 1.760 - 1.773: 92.6750% ( 1020) 00:16:43.124 1.773 - 1.787: 96.9838% ( 870) 00:16:43.124 1.787 - 1.800: 98.7272% ( 352) 00:16:43.124 1.800 - 1.813: 99.2819% ( 112) 00:16:43.124 1.813 - 1.827: 99.4205% ( 28) 00:16:43.124 1.827 - 1.840: 99.4503% ( 6) 00:16:43.124 1.853 - 1.867: 99.4552% ( 1) 00:16:43.124 1.880 - 1.893: 99.4602% ( 1) 00:16:43.124 3.360 - 3.373: 99.4651% ( 1) 00:16:43.124 3.387 - 3.400: 99.4701% ( 1) 00:16:43.124 3.653 - 3.680: 99.4750% ( 1) 00:16:43.124 3.813 - 3.840: 99.4800% ( 1) 00:16:43.124 4.347 - 4.373: 99.4849% ( 1) 00:16:43.124 4.400 - 4.427: 99.4899% ( 1) 00:16:43.124 4.773 - 4.800: 99.4948% ( 1) 00:16:43.124 4.827 - 4.853: 99.4998% ( 1) 00:16:43.124 4.880 - 4.907: 99.5047% ( 1) 00:16:43.124 4.907 - 4.933: 99.5097% ( 1) 00:16:43.124 4.933 - 4.960: 99.5146% ( 1) 00:16:43.124 4.987 - 5.013: 99.5196% ( 1) 00:16:43.125 5.147 - 5.173: 99.5245% ( 1) 00:16:43.125 5.173 - 5.200: 99.5394% ( 3) 00:16:43.125 5.307 - 5.333: 99.5493% ( 2) 00:16:43.125 5.413 - 5.440: 99.5543% ( 1) 00:16:43.125 5.520 - 5.547: 99.5592% ( 1) 00:16:43.125 5.547 - 5.573: 99.5642% ( 1) 00:16:43.125 5.627 - 5.653: 99.5691% ( 1) 00:16:43.125 5.680 - 5.707: 99.5741% ( 1) 00:16:43.125 5.733 - 5.760: 99.5790% ( 1) 00:16:43.125 5.893 - 5.920: 99.5889% ( 2) 00:16:43.125 6.160 - 6.187: 99.5939% ( 1) 
00:16:43.125 10.827 - 10.880: 99.5988% ( 1) 00:16:43.125 11.573 - 11.627: 99.6038% ( 1) 00:16:43.125 12.000 - 12.053: 99.6087% ( 1) 00:16:43.125 12.267 - 12.320: 99.6186% ( 2) 00:16:43.125 33.067 - 33.280: 99.6236% ( 1) 00:16:43.125 3850.240 - 3877.547: 99.6285% ( 1) 00:16:43.125 3986.773 - 4014.080: 100.0000% ( 75) 00:16:43.125 00:16:43.125 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:43.125 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:43.125 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:43.125 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:43.125 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:43.125 [ 00:16:43.125 { 00:16:43.125 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:43.125 "subtype": "Discovery", 00:16:43.125 "listen_addresses": [], 00:16:43.125 "allow_any_host": true, 00:16:43.125 "hosts": [] 00:16:43.125 }, 00:16:43.125 { 00:16:43.125 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:43.125 "subtype": "NVMe", 00:16:43.125 "listen_addresses": [ 00:16:43.125 { 00:16:43.125 "trtype": "VFIOUSER", 00:16:43.125 "adrfam": "IPv4", 00:16:43.125 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:43.125 "trsvcid": "0" 00:16:43.125 } 00:16:43.125 ], 00:16:43.125 "allow_any_host": true, 00:16:43.125 "hosts": [], 00:16:43.125 "serial_number": "SPDK1", 00:16:43.125 "model_number": "SPDK bdev Controller", 00:16:43.125 "max_namespaces": 32, 00:16:43.125 "min_cntlid": 1, 00:16:43.125 "max_cntlid": 65519, 00:16:43.125 "namespaces": [ 00:16:43.125 { 00:16:43.125 "nsid": 1, 00:16:43.125 "bdev_name": "Malloc1", 00:16:43.125 "name": "Malloc1", 00:16:43.125 "nguid": "E3B177FC5FED438CA10280CCB1E6257B", 00:16:43.125 "uuid": "e3b177fc-5fed-438c-a102-80ccb1e6257b" 00:16:43.125 }, 00:16:43.125 { 00:16:43.125 "nsid": 2, 00:16:43.125 "bdev_name": "Malloc3", 00:16:43.125 "name": "Malloc3", 00:16:43.125 "nguid": "EC58CF244BEF4FEA9394AC2AB2C25F0F", 00:16:43.125 "uuid": "ec58cf24-4bef-4fea-9394-ac2ab2c25f0f" 00:16:43.125 } 00:16:43.125 ] 00:16:43.125 }, 00:16:43.125 { 00:16:43.125 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:43.125 "subtype": "NVMe", 00:16:43.125 "listen_addresses": [ 00:16:43.125 { 00:16:43.125 "trtype": "VFIOUSER", 00:16:43.125 "adrfam": "IPv4", 00:16:43.125 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:43.125 "trsvcid": "0" 00:16:43.125 } 00:16:43.125 ], 00:16:43.125 "allow_any_host": true, 00:16:43.125 "hosts": [], 00:16:43.125 "serial_number": "SPDK2", 00:16:43.125 "model_number": "SPDK bdev Controller", 00:16:43.125 "max_namespaces": 32, 00:16:43.125 "min_cntlid": 1, 00:16:43.125 "max_cntlid": 65519, 00:16:43.125 "namespaces": [ 00:16:43.125 { 00:16:43.125 "nsid": 1, 00:16:43.125 "bdev_name": "Malloc2", 00:16:43.125 "name": "Malloc2", 00:16:43.125 "nguid": "3561A553ACF14ADD810690A8E16C2C6C", 00:16:43.125 "uuid": "3561a553-acf1-4add-8106-90a8e16c2c6c" 00:16:43.125 } 00:16:43.125 ] 00:16:43.125 } 00:16:43.125 ] 00:16:43.125 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:43.125 08:15:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1930239 00:16:43.125 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:43.125 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:43.125 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:43.125 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:43.125 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:43.125 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:43.125 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:43.125 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:43.125 [2024-11-28 08:15:40.376547] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:43.125 Malloc4 00:16:43.387 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:43.387 [2024-11-28 08:15:40.570895] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:43.387 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:43.387 Asynchronous Event Request test 00:16:43.387 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:43.387 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:43.387 Registering asynchronous event callbacks... 00:16:43.387 Starting namespace attribute notice tests for all controllers... 00:16:43.387 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:43.387 aer_cb - Changed Namespace 00:16:43.387 Cleaning up... 
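The AER check above has two halves: the aer tool connects with -n 2 and touches /tmp/aer_touch_file once its event callbacks are registered, and the script then hot-adds a second namespace, which is what produced the "aer_cb for log page 4" Namespace Attribute Changed notice. The trigger side reduces to the two RPCs used here, sketched with $SPDK again standing in for the workspace spdk checkout:

    # create a 64 MiB malloc bdev and attach it as nsid 2 of cnode2;
    # the namespace-list change makes the controller post the async event
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2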
00:16:43.649 [ 00:16:43.649 { 00:16:43.649 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:43.649 "subtype": "Discovery", 00:16:43.649 "listen_addresses": [], 00:16:43.649 "allow_any_host": true, 00:16:43.649 "hosts": [] 00:16:43.649 }, 00:16:43.649 { 00:16:43.649 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:43.649 "subtype": "NVMe", 00:16:43.649 "listen_addresses": [ 00:16:43.649 { 00:16:43.649 "trtype": "VFIOUSER", 00:16:43.649 "adrfam": "IPv4", 00:16:43.649 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:43.649 "trsvcid": "0" 00:16:43.649 } 00:16:43.649 ], 00:16:43.649 "allow_any_host": true, 00:16:43.649 "hosts": [], 00:16:43.649 "serial_number": "SPDK1", 00:16:43.649 "model_number": "SPDK bdev Controller", 00:16:43.649 "max_namespaces": 32, 00:16:43.649 "min_cntlid": 1, 00:16:43.649 "max_cntlid": 65519, 00:16:43.649 "namespaces": [ 00:16:43.649 { 00:16:43.649 "nsid": 1, 00:16:43.649 "bdev_name": "Malloc1", 00:16:43.649 "name": "Malloc1", 00:16:43.649 "nguid": "E3B177FC5FED438CA10280CCB1E6257B", 00:16:43.650 "uuid": "e3b177fc-5fed-438c-a102-80ccb1e6257b" 00:16:43.650 }, 00:16:43.650 { 00:16:43.650 "nsid": 2, 00:16:43.650 "bdev_name": "Malloc3", 00:16:43.650 "name": "Malloc3", 00:16:43.650 "nguid": "EC58CF244BEF4FEA9394AC2AB2C25F0F", 00:16:43.650 "uuid": "ec58cf24-4bef-4fea-9394-ac2ab2c25f0f" 00:16:43.650 } 00:16:43.650 ] 00:16:43.650 }, 00:16:43.650 { 00:16:43.650 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:43.650 "subtype": "NVMe", 00:16:43.650 "listen_addresses": [ 00:16:43.650 { 00:16:43.650 "trtype": "VFIOUSER", 00:16:43.650 "adrfam": "IPv4", 00:16:43.650 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:43.650 "trsvcid": "0" 00:16:43.650 } 00:16:43.650 ], 00:16:43.650 "allow_any_host": true, 00:16:43.650 "hosts": [], 00:16:43.650 "serial_number": "SPDK2", 00:16:43.650 "model_number": "SPDK bdev Controller", 00:16:43.650 "max_namespaces": 32, 00:16:43.650 "min_cntlid": 1, 00:16:43.650 "max_cntlid": 65519, 00:16:43.650 "namespaces": [ 00:16:43.650 { 00:16:43.650 "nsid": 1, 00:16:43.650 "bdev_name": "Malloc2", 00:16:43.650 "name": "Malloc2", 00:16:43.650 "nguid": "3561A553ACF14ADD810690A8E16C2C6C", 00:16:43.650 "uuid": "3561a553-acf1-4add-8106-90a8e16c2c6c" 00:16:43.650 }, 00:16:43.650 { 00:16:43.650 "nsid": 2, 00:16:43.650 "bdev_name": "Malloc4", 00:16:43.650 "name": "Malloc4", 00:16:43.650 "nguid": "2500C57C01BB4E77BEC5F7FDE04859CA", 00:16:43.650 "uuid": "2500c57c-01bb-4e77-bec5-f7fde04859ca" 00:16:43.650 } 00:16:43.650 ] 00:16:43.650 } 00:16:43.650 ] 00:16:43.650 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1930239 00:16:43.650 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:43.650 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1920583 00:16:43.650 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1920583 ']' 00:16:43.650 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1920583 00:16:43.650 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:43.650 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.650 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1920583 00:16:43.650 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:43.650 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:43.650 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1920583' 00:16:43.650 killing process with pid 1920583 00:16:43.650 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1920583 00:16:43.650 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1920583 00:16:43.912 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:43.912 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:43.912 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:43.912 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:43.912 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:43.912 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1930385 00:16:43.912 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1930385' 00:16:43.912 Process pid: 1930385 00:16:43.912 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:43.912 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:43.912 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1930385 00:16:43.912 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1930385 ']' 00:16:43.912 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.912 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.912 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.912 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.912 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:43.912 [2024-11-28 08:15:41.054749] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:43.912 [2024-11-28 08:15:41.055686] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:16:43.912 [2024-11-28 08:15:41.055730] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.912 [2024-11-28 08:15:41.139530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:43.912 [2024-11-28 08:15:41.174620] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.912 [2024-11-28 08:15:41.174658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.912 [2024-11-28 08:15:41.174664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.912 [2024-11-28 08:15:41.174669] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.912 [2024-11-28 08:15:41.174677] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:43.912 [2024-11-28 08:15:41.176199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.912 [2024-11-28 08:15:41.176291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.912 [2024-11-28 08:15:41.176425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.912 [2024-11-28 08:15:41.176427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:44.174 [2024-11-28 08:15:41.231037] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:44.174 [2024-11-28 08:15:41.231941] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:16:44.174 [2024-11-28 08:15:41.232854] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:44.174 [2024-11-28 08:15:41.233379] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:44.174 [2024-11-28 08:15:41.233408] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
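From here the same setup is repeated in interrupt mode: the target is relaunched with --interrupt-mode on cores 0-3, and the VFIOUSER transport is created with -M -I, so reactors and poll groups can sleep instead of busy-polling (the "Set spdk_thread ... to intr mode" notices above show each poll group switching over). The relevant pair of commands, as a sketch with flags copied from this run (backgrounding and socket-wait details simplified):

    # start the target with interrupt-mode reactors on cores 0-3
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    # once the RPC socket is up, create the transport in interrupt mode as well
    $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I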
00:16:44.745 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:44.745 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:44.745 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:45.689 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:45.949 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:45.949 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:45.949 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:45.949 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:45.949 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:46.210 Malloc1 00:16:46.210 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:46.210 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:46.470 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:46.731 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:46.731 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:46.731 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:46.731 Malloc2 00:16:46.991 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:46.991 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:47.251 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:47.512 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:47.512 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1930385 00:16:47.512 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 1930385 ']' 00:16:47.512 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1930385 00:16:47.512 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:47.512 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.512 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1930385 00:16:47.512 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:47.512 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:47.512 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1930385' 00:16:47.512 killing process with pid 1930385 00:16:47.512 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1930385 00:16:47.512 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1930385 00:16:47.512 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:47.512 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:47.512 00:16:47.512 real 0m50.999s 00:16:47.512 user 3m14.897s 00:16:47.512 sys 0m3.188s 00:16:47.512 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:47.512 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:47.512 ************************************ 00:16:47.512 END TEST nvmf_vfio_user 00:16:47.512 ************************************ 00:16:47.773 08:15:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:47.773 08:15:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:47.773 08:15:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:47.773 08:15:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:47.773 ************************************ 00:16:47.773 START TEST nvmf_vfio_user_nvme_compliance 00:16:47.773 ************************************ 00:16:47.773 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:47.773 * Looking for test storage... 
00:16:47.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:47.773 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:47.773 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:16:47.773 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:47.773 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:47.773 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:47.773 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:47.773 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:47.773 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:47.773 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:47.773 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:47.773 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:47.773 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:47.773 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:47.773 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:47.773 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:47.773 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:47.773 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:47.774 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:47.774 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:47.774 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:47.774 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:47.774 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:47.774 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:47.774 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:47.774 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:47.774 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:47.774 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:47.774 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:47.774 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:47.774 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:47.774 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:47.774 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:47.774 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:47.774 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:47.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.774 --rc genhtml_branch_coverage=1 00:16:47.774 --rc genhtml_function_coverage=1 00:16:47.774 --rc genhtml_legend=1 00:16:47.774 --rc geninfo_all_blocks=1 00:16:47.774 --rc geninfo_unexecuted_blocks=1 00:16:47.774 00:16:47.774 ' 00:16:47.774 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:47.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.774 --rc genhtml_branch_coverage=1 00:16:47.774 --rc genhtml_function_coverage=1 00:16:47.774 --rc genhtml_legend=1 00:16:47.774 --rc geninfo_all_blocks=1 00:16:47.774 --rc geninfo_unexecuted_blocks=1 00:16:47.774 00:16:47.774 ' 00:16:47.774 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:47.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.774 --rc genhtml_branch_coverage=1 00:16:47.774 --rc genhtml_function_coverage=1 00:16:47.774 --rc genhtml_legend=1 00:16:47.774 --rc geninfo_all_blocks=1 00:16:47.774 --rc geninfo_unexecuted_blocks=1 00:16:47.774 00:16:47.774 ' 00:16:47.774 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:47.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.774 --rc genhtml_branch_coverage=1 00:16:47.774 --rc genhtml_function_coverage=1 00:16:47.774 --rc genhtml_legend=1 00:16:47.774 --rc geninfo_all_blocks=1 00:16:47.774 --rc 
geninfo_unexecuted_blocks=1 00:16:47.774 00:16:47.774 ' 00:16:47.774 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:48.034 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:48.034 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.034 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:48.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1931331 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1931331' 00:16:48.035 Process pid: 1931331 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1931331 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1931331 ']' 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:48.035 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:48.035 [2024-11-28 08:15:45.156405] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
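Note on the "[: : integer expression expected" message recorded just above (it recurs each time test/nvmf/common.sh is sourced later in this run): the traced test at line 33 expands to '[' '' -eq 1 ']', i.e. a numeric -eq comparison against a variable that is unset or empty in this environment, which bash's [ builtin rejects. The test simply evaluates false and the script continues, so the error is cosmetic for this run. A minimal sketch of the usual guard, with a hypothetical variable name since the log shows only the already-expanded test:

    # hypothetical flag name; the log shows only '[' '' -eq 1 ']'
    SOME_FLAG=""
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then   # :-0 substitutes for unset OR empty, keeping the numeric test well-formed
        echo "flag enabled"
    fi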
00:16:48.035 [2024-11-28 08:15:45.156481] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.035 [2024-11-28 08:15:45.245875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:48.035 [2024-11-28 08:15:45.285247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.035 [2024-11-28 08:15:45.285290] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:48.035 [2024-11-28 08:15:45.285296] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:48.035 [2024-11-28 08:15:45.285301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:48.035 [2024-11-28 08:15:45.285306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:48.035 [2024-11-28 08:15:45.286626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.035 [2024-11-28 08:15:45.286781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.035 [2024-11-28 08:15:45.286783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.976 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:48.976 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:16:48.976 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:49.920 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:49.920 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:49.920 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:49.920 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.920 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:49.920 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.920 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:49.920 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:49.920 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.920 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:49.920 malloc0 00:16:49.920 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.920 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:49.920 08:15:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.920 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:49.920 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.920 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:49.920 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.920 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:49.920 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.920 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:49.921 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.921 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:49.921 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.921 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:49.921 00:16:49.921 00:16:49.921 CUnit - A unit testing framework for C - Version 2.1-3 00:16:49.921 http://cunit.sourceforge.net/ 00:16:49.921 00:16:49.921 00:16:49.921 Suite: nvme_compliance 00:16:50.182 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-28 08:15:47.220532] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:50.182 [2024-11-28 08:15:47.221832] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:50.182 [2024-11-28 08:15:47.221844] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:50.182 [2024-11-28 08:15:47.221848] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:50.182 [2024-11-28 08:15:47.223547] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:50.182 passed 00:16:50.182 Test: admin_identify_ctrlr_verify_fused ...[2024-11-28 08:15:47.299019] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:50.182 [2024-11-28 08:15:47.302038] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:50.182 passed 00:16:50.182 Test: admin_identify_ns ...[2024-11-28 08:15:47.378613] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:50.182 [2024-11-28 08:15:47.438166] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:50.182 [2024-11-28 08:15:47.446165] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:50.182 [2024-11-28 08:15:47.467243] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:16:50.442 passed 00:16:50.442 Test: admin_get_features_mandatory_features ...[2024-11-28 08:15:47.540474] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:50.442 [2024-11-28 08:15:47.543486] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:50.442 passed 00:16:50.442 Test: admin_get_features_optional_features ...[2024-11-28 08:15:47.619936] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:50.442 [2024-11-28 08:15:47.622947] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:50.442 passed 00:16:50.442 Test: admin_set_features_number_of_queues ...[2024-11-28 08:15:47.698521] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:50.703 [2024-11-28 08:15:47.803256] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:50.703 passed 00:16:50.703 Test: admin_get_log_page_mandatory_logs ...[2024-11-28 08:15:47.878343] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:50.703 [2024-11-28 08:15:47.881362] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:50.703 passed 00:16:50.703 Test: admin_get_log_page_with_lpo ...[2024-11-28 08:15:47.958082] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:50.965 [2024-11-28 08:15:48.027167] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:50.965 [2024-11-28 08:15:48.040217] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:50.965 passed 00:16:50.965 Test: fabric_property_get ...[2024-11-28 08:15:48.114494] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:50.965 [2024-11-28 08:15:48.115692] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:50.965 [2024-11-28 08:15:48.117511] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:50.965 passed 00:16:50.965 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-28 08:15:48.192984] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:50.965 [2024-11-28 08:15:48.194188] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:50.965 [2024-11-28 08:15:48.196006] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:50.965 passed 00:16:51.226 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-28 08:15:48.270707] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:51.227 [2024-11-28 08:15:48.354170] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:51.227 [2024-11-28 08:15:48.370165] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:51.227 [2024-11-28 08:15:48.375248] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:51.227 passed 00:16:51.227 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-28 08:15:48.450306] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:51.227 [2024-11-28 08:15:48.451509] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:51.227 [2024-11-28 08:15:48.453325] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:16:51.227 passed 00:16:51.488 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-28 08:15:48.530069] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:51.488 [2024-11-28 08:15:48.607165] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:51.488 [2024-11-28 08:15:48.631167] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:51.488 [2024-11-28 08:15:48.636234] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:51.488 passed 00:16:51.488 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-28 08:15:48.709493] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:51.488 [2024-11-28 08:15:48.710702] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:51.488 [2024-11-28 08:15:48.710719] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:51.488 [2024-11-28 08:15:48.712513] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:51.488 passed 00:16:51.748 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-28 08:15:48.787536] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:51.748 [2024-11-28 08:15:48.883162] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:51.748 [2024-11-28 08:15:48.891165] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:51.748 [2024-11-28 08:15:48.899164] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:51.748 [2024-11-28 08:15:48.907167] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:51.748 [2024-11-28 08:15:48.936235] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:51.748 passed 00:16:51.748 Test: admin_create_io_sq_verify_pc ...[2024-11-28 08:15:49.008238] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:51.748 [2024-11-28 08:15:49.026169] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:52.008 [2024-11-28 08:15:49.043402] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:52.008 passed 00:16:52.008 Test: admin_create_io_qp_max_qps ...[2024-11-28 08:15:49.117865] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:52.947 [2024-11-28 08:15:50.222167] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:16:53.519 [2024-11-28 08:15:50.602078] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:53.519 passed 00:16:53.519 Test: admin_create_io_sq_shared_cq ...[2024-11-28 08:15:50.676897] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:53.780 [2024-11-28 08:15:50.809166] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:53.780 [2024-11-28 08:15:50.846207] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:53.780 passed 00:16:53.780 00:16:53.780 Run Summary: Type Total Ran Passed Failed Inactive 00:16:53.780 suites 1 1 n/a 0 0 00:16:53.780 tests 18 18 18 0 0 00:16:53.780 asserts 
360 360 360 0 n/a 00:16:53.780 00:16:53.780 Elapsed time = 1.489 seconds 00:16:53.780 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1931331 00:16:53.780 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1931331 ']' 00:16:53.780 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1931331 00:16:53.780 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:16:53.780 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:53.780 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1931331 00:16:53.780 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:53.780 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:53.780 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1931331' 00:16:53.780 killing process with pid 1931331 00:16:53.780 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1931331 00:16:53.780 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1931331 00:16:53.780 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:54.041 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:54.041 00:16:54.041 real 0m6.213s 00:16:54.041 user 0m17.579s 00:16:54.041 sys 0m0.551s 00:16:54.041 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:54.041 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:54.041 ************************************ 00:16:54.041 END TEST nvmf_vfio_user_nvme_compliance 00:16:54.041 ************************************ 00:16:54.041 08:15:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:54.042 08:15:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:54.042 08:15:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:54.042 08:15:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:54.042 ************************************ 00:16:54.042 START TEST nvmf_vfio_user_fuzz 00:16:54.042 ************************************ 00:16:54.042 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:54.042 * Looking for test storage... 
00:16:54.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:54.042 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:54.042 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:16:54.042 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:54.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.305 --rc genhtml_branch_coverage=1 00:16:54.305 --rc genhtml_function_coverage=1 00:16:54.305 --rc genhtml_legend=1 00:16:54.305 --rc geninfo_all_blocks=1 00:16:54.305 --rc geninfo_unexecuted_blocks=1 00:16:54.305 00:16:54.305 ' 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:54.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.305 --rc genhtml_branch_coverage=1 00:16:54.305 --rc genhtml_function_coverage=1 00:16:54.305 --rc genhtml_legend=1 00:16:54.305 --rc geninfo_all_blocks=1 00:16:54.305 --rc geninfo_unexecuted_blocks=1 00:16:54.305 00:16:54.305 ' 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:54.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.305 --rc genhtml_branch_coverage=1 00:16:54.305 --rc genhtml_function_coverage=1 00:16:54.305 --rc genhtml_legend=1 00:16:54.305 --rc geninfo_all_blocks=1 00:16:54.305 --rc geninfo_unexecuted_blocks=1 00:16:54.305 00:16:54.305 ' 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:54.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:54.305 --rc genhtml_branch_coverage=1 00:16:54.305 --rc genhtml_function_coverage=1 00:16:54.305 --rc genhtml_legend=1 00:16:54.305 --rc geninfo_all_blocks=1 00:16:54.305 --rc geninfo_unexecuted_blocks=1 00:16:54.305 00:16:54.305 ' 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:54.305 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:54.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1932511 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1932511' 00:16:54.306 Process pid: 1932511 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1932511 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1932511 ']' 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
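The fuzz-side target is launched with -m 0x1 (a single reactor on core 0), in contrast to the compliance target earlier, which ran with -m 0x7 across cores 0-2; waitforlisten then blocks until the freshly started nvmf_tgt (pid 1932511) is up and accepting RPCs on /var/tmp/spdk.sock. A rough sketch of that wait loop, simplified from the real helper in common/autotest_common.sh (which polls through rpc.py rather than merely testing for the socket):

    # simplified sketch of waitforlisten's behavior
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local i
        for i in $(seq 1 100); do
            kill -0 "$pid" 2>/dev/null || return 1   # target process died
            [ -S "$rpc_addr" ] && return 0           # RPC socket is up
            sleep 0.1
        done
        return 1                                     # timed out waiting
    }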
00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.306 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:55.249 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:55.249 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:16:55.249 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:56.192 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:56.192 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.192 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:56.192 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.192 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:56.192 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:56.192 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.192 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:56.192 malloc0 00:16:56.192 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.192 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:56.192 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.192 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:56.192 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.192 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:56.192 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.192 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:56.192 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.192 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:56.192 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.192 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:56.192 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.192 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
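At this point the target side for the fuzzer is fully assembled over RPC: a VFIOUSER transport, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2021-09.io.spdk:cnode0 with malloc0 as its namespace, and a listener at /var/run/vfio-user. As a sketch of the same setup done by hand (assuming rpc_cmd is the usual thin wrapper that forwards these verbs to scripts/rpc.py against /var/tmp/spdk.sock):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    $rpc bdev_malloc_create 64 512 -b malloc0
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

nvme_fuzz then attaches to that trid and fuzzes it for 30 seconds with a fixed seed (-t 30 -S 123456), which keeps the run reproducible.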
00:16:56.192 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:28.307 Fuzzing completed. Shutting down the fuzz application 00:17:28.307 00:17:28.307 Dumping successful admin opcodes: 00:17:28.307 9, 10, 00:17:28.307 Dumping successful io opcodes: 00:17:28.307 0, 00:17:28.307 NS: 0x20000081ef00 I/O qp, Total commands completed: 1436808, total successful commands: 5631, random_seed: 626865216 00:17:28.307 NS: 0x20000081ef00 admin qp, Total commands completed: 357440, total successful commands: 94, random_seed: 2291348672 00:17:28.307 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:28.307 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.307 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:28.307 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.307 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1932511 00:17:28.307 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1932511 ']' 00:17:28.307 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1932511 00:17:28.307 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:17:28.307 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.307 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1932511 00:17:28.307 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:28.307 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:28.307 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1932511' 00:17:28.307 killing process with pid 1932511 00:17:28.307 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1932511 00:17:28.307 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1932511 00:17:28.307 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:28.307 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:28.307 00:17:28.307 real 0m32.843s 00:17:28.307 user 0m37.903s 00:17:28.307 sys 0m24.780s 00:17:28.307 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.307 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:28.307 ************************************ 
00:17:28.307 END TEST nvmf_vfio_user_fuzz 00:17:28.307 ************************************ 00:17:28.307 08:16:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:28.307 08:16:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:28.307 08:16:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:28.307 08:16:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:28.307 ************************************ 00:17:28.307 START TEST nvmf_auth_target 00:17:28.307 ************************************ 00:17:28.307 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:28.307 * Looking for test storage... 00:17:28.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:28.307 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:28.307 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:17:28.307 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:28.307 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:28.307 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:28.307 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:28.307 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:28.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.308 --rc genhtml_branch_coverage=1 00:17:28.308 --rc genhtml_function_coverage=1 00:17:28.308 --rc genhtml_legend=1 00:17:28.308 --rc geninfo_all_blocks=1 00:17:28.308 --rc geninfo_unexecuted_blocks=1 00:17:28.308 00:17:28.308 ' 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:28.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.308 --rc genhtml_branch_coverage=1 00:17:28.308 --rc genhtml_function_coverage=1 00:17:28.308 --rc genhtml_legend=1 00:17:28.308 --rc geninfo_all_blocks=1 00:17:28.308 --rc geninfo_unexecuted_blocks=1 00:17:28.308 00:17:28.308 ' 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:28.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.308 --rc genhtml_branch_coverage=1 00:17:28.308 --rc genhtml_function_coverage=1 00:17:28.308 --rc genhtml_legend=1 00:17:28.308 --rc geninfo_all_blocks=1 00:17:28.308 --rc geninfo_unexecuted_blocks=1 00:17:28.308 00:17:28.308 ' 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:28.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.308 --rc genhtml_branch_coverage=1 00:17:28.308 --rc genhtml_function_coverage=1 00:17:28.308 --rc genhtml_legend=1 00:17:28.308 --rc geninfo_all_blocks=1 00:17:28.308 --rc geninfo_unexecuted_blocks=1 00:17:28.308 00:17:28.308 ' 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.308 08:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:28.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:28.308 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:28.309 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.309 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:28.309 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:28.309 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:28.309 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.309 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:28.309 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.309 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:28.309 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:28.309 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:17:28.309 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:34.902 
08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:34.902 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:34.902 08:16:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:34.902 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:34.902 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:34.902 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
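The PCI sweep above matched both ports of an Intel E810 NIC (vendor 0x8086, device 0x159b, driver ice) and mapped each function to its kernel interface through sysfs. The lookup stands alone as below; the glob is the one the trace shows, and the PCI address is the first port reported above:

  pci=0000:4b:00.0
  # list the net interfaces sitting on this PCI function, as in
  # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  for path in "/sys/bus/pci/devices/$pci/net/"*; do
      [ -e "$path" ] && echo "${path##*/}"     # prints: cvl_0_0
  done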
net_devs+=("${pci_net_devs[@]}") 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.902 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:34.903 08:16:31 
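nvmf_tcp_init then splits the two ports across a network namespace, so target and initiator talk over real wire TCP rather than loopback. Collected from the trace, the recipe is:

  ip netns add cvl_0_0_ns_spdk                  # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port

The two pings that follow in the trace confirm the path in both directions before any NVMe traffic is attempted.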
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:34.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:34.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:17:34.903 00:17:34.903 --- 10.0.0.2 ping statistics --- 00:17:34.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.903 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:34.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:34.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:17:34.903 00:17:34.903 --- 10.0.0.1 ping statistics --- 00:17:34.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.903 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1942635 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1942635 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1942635 ']' 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
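With connectivity proven, nvmfappstart launches the target inside the namespace with DHCHAP debug logging (-L nvmf_auth) and waits for its RPC socket. A minimal sketch of that launch-and-wait, assuming waitforlisten simply polls the RPC server until it answers (the helper itself lives in autotest_common.sh and is not traced here):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
  nvmfpid=$!
  # poll until the RPC server answers, bailing out if the process died
  while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
      sleep 0.1
  done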
00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.903 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.476 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:35.476 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:35.476 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:35.476 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:35.476 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.476 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.476 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1942742 00:17:35.477 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:35.477 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:35.477 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:35.477 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:35.477 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:35.477 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:35.477 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:17:35.477 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:35.477 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:35.477 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=579f3f23ede528c69cd606c2a029820571a0e79fe73091ac 00:17:35.477 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:35.477 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.n5R 00:17:35.477 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 579f3f23ede528c69cd606c2a029820571a0e79fe73091ac 0 00:17:35.477 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 579f3f23ede528c69cd606c2a029820571a0e79fe73091ac 0 00:17:35.477 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:35.477 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:35.477 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=579f3f23ede528c69cd606c2a029820571a0e79fe73091ac 00:17:35.477 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:35.477 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
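gen_dhchap_key null 48 above drew 24 random bytes as a 48-character hex string and is handing it to an inline python (the "python -" step) to wrap as a DHHC-1 secret. A sketch of that wrapping, on the assumption that format_dhchap_key follows the nvme-cli gen-dhchap-key convention: base64 over the key characters plus a little-endian CRC-32, with the middle field naming the hash (00=null, 01=sha256, 02=sha384, 03=sha512):

  key=579f3f23ede528c69cd606c2a029820571a0e79fe73091ac   # value from the trace
  python3 - "$key" 0 <<'PY'
  import base64, sys, zlib
  key, digest = sys.argv[1].encode(), int(sys.argv[2])
  crc = zlib.crc32(key).to_bytes(4, "little")            # 4-byte integrity tail
  print(f"DHHC-1:{digest:02d}:{base64.b64encode(key + crc).decode()}:")
  PY

The nvme connect invocations later in the log show exactly this shape: the base64 body of the key0 secret decodes back to the hex string above plus four CRC bytes.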
00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.n5R 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.n5R 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.n5R 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=51009dd27e10fa0dfc845e4ec297f83e1a242f8d85ee3f8a3efd2ce7a88aafa1 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.WML 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 51009dd27e10fa0dfc845e4ec297f83e1a242f8d85ee3f8a3efd2ce7a88aafa1 3 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 51009dd27e10fa0dfc845e4ec297f83e1a242f8d85ee3f8a3efd2ce7a88aafa1 3 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=51009dd27e10fa0dfc845e4ec297f83e1a242f8d85ee3f8a3efd2ce7a88aafa1 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.WML 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.WML 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.WML 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=154818694103e1d4075da71a39677490 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.f72 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 154818694103e1d4075da71a39677490 1 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 154818694103e1d4075da71a39677490 1 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=154818694103e1d4075da71a39677490 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.f72 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.f72 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.f72 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=162a155be8f3ba086b9949c92f393c6c0725d8a167c42efa 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.U19 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 162a155be8f3ba086b9949c92f393c6c0725d8a167c42efa 2 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 162a155be8f3ba086b9949c92f393c6c0725d8a167c42efa 2 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:35.739 08:16:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=162a155be8f3ba086b9949c92f393c6c0725d8a167c42efa 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.U19 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.U19 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.U19 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:35.739 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:35.739 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:35.739 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:35.739 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:35.739 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=096cb0379c2b7f224a2e234230728fe24ec961cde2e16d44 00:17:35.739 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:35.739 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.vdN 00:17:35.739 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 096cb0379c2b7f224a2e234230728fe24ec961cde2e16d44 2 00:17:35.739 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 096cb0379c2b7f224a2e234230728fe24ec961cde2e16d44 2 00:17:35.739 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:35.740 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:35.740 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=096cb0379c2b7f224a2e234230728fe24ec961cde2e16d44 00:17:35.740 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:35.740 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.vdN 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.vdN 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.vdN 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9c0b926c52ca46f4666b75a18e59e96a 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.6Vp 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9c0b926c52ca46f4666b75a18e59e96a 1 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9c0b926c52ca46f4666b75a18e59e96a 1 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9c0b926c52ca46f4666b75a18e59e96a 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.6Vp 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.6Vp 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.6Vp 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cc4804f695ea33d46e07cb81911046675650c24f2493115a89e6fd6993f7841d 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.5UZ 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key cc4804f695ea33d46e07cb81911046675650c24f2493115a89e6fd6993f7841d 3 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cc4804f695ea33d46e07cb81911046675650c24f2493115a89e6fd6993f7841d 3 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cc4804f695ea33d46e07cb81911046675650c24f2493115a89e6fd6993f7841d 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.5UZ 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.5UZ 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.5UZ 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1942635 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1942635 ']' 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.002 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.263 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.263 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:36.263 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1942742 /var/tmp/host.sock 00:17:36.263 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1942742 ']' 00:17:36.263 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:36.263 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.264 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:36.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
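All four key slots are now staged. Gathered from the trace, the files pair up as follows; ckeys[3] is deliberately left empty, so the key3 iterations should exercise unidirectional authentication only (the host proves its identity, the controller does not):

  # keyN = DHCHAP key for slot N, ckeyN = controller key for bidirectional auth
  keys[0]=/tmp/spdk.key-null.n5R        ckeys[0]=/tmp/spdk.key-sha512.WML
  keys[1]=/tmp/spdk.key-sha256.f72      ckeys[1]=/tmp/spdk.key-sha384.U19
  keys[2]=/tmp/spdk.key-sha384.vdN      ckeys[2]=/tmp/spdk.key-sha256.6Vp
  keys[3]=/tmp/spdk.key-sha512.5UZ      ckeys[3]=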
00:17:36.264 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.264 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.525 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.525 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:36.525 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:36.525 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.525 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.525 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.525 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:36.525 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.n5R 00:17:36.525 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.525 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.525 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.525 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.n5R 00:17:36.525 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.n5R 00:17:36.786 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.WML ]] 00:17:36.786 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WML 00:17:36.786 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.786 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.786 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.786 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WML 00:17:36.786 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WML 00:17:36.786 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:36.786 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.f72 00:17:36.786 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.786 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.786 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.786 08:16:34 
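Each key file is registered under the same keyring name on both applications, so later RPCs can refer to it as key0/ckey0 rather than by path. For slot 0 the pair of calls is (commands as traced; rpc.py with no -s talks to the target's default /var/tmp/spdk.sock):

  scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.n5R      # target
  scripts/rpc.py -s /var/tmp/host.sock \
      keyring_file_add_key key0 /tmp/spdk.key-null.n5R                 # host app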
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.f72 00:17:36.786 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.f72 00:17:37.046 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.U19 ]] 00:17:37.046 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.U19 00:17:37.046 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.046 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.046 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.046 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.U19 00:17:37.046 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.U19 00:17:37.307 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:37.307 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.vdN 00:17:37.307 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.307 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.307 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.307 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.vdN 00:17:37.307 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.vdN 00:17:37.569 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.6Vp ]] 00:17:37.569 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6Vp 00:17:37.569 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.569 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.569 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.569 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6Vp 00:17:37.569 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6Vp 00:17:37.830 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:37.830 08:16:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.5UZ 00:17:37.830 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.830 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.830 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.830 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.5UZ 00:17:37.830 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.5UZ 00:17:37.830 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:37.830 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:37.830 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.830 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.830 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:37.830 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:38.090 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:38.090 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.090 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:38.090 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:38.090 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:38.090 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.090 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.090 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.090 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.090 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.090 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.090 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.090 
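connect_authenticate then runs one cell of the matrix: the target binds the keys to the host NQN on the subsystem, which makes DHCHAP mandatory for that host, and the host-side app dials in with the matching key names. In sketch form (arguments verbatim from the trace, with $NVME_HOSTNQN standing in for the long uuid NQN set earlier):

  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      "$NVME_HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$NVME_HOSTNQN" \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0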
08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.352 00:17:38.352 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.352 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:38.352 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.613 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.613 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.613 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.613 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.613 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.613 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:38.613 { 00:17:38.613 "cntlid": 1, 00:17:38.613 "qid": 0, 00:17:38.613 "state": "enabled", 00:17:38.613 "thread": "nvmf_tgt_poll_group_000", 00:17:38.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:38.613 "listen_address": { 00:17:38.613 "trtype": "TCP", 00:17:38.613 "adrfam": "IPv4", 00:17:38.613 "traddr": "10.0.0.2", 00:17:38.613 "trsvcid": "4420" 00:17:38.613 }, 00:17:38.613 "peer_address": { 00:17:38.613 "trtype": "TCP", 00:17:38.613 "adrfam": "IPv4", 00:17:38.613 "traddr": "10.0.0.1", 00:17:38.613 "trsvcid": "53434" 00:17:38.613 }, 00:17:38.613 "auth": { 00:17:38.613 "state": "completed", 00:17:38.613 "digest": "sha256", 00:17:38.613 "dhgroup": "null" 00:17:38.613 } 00:17:38.613 } 00:17:38.613 ]' 00:17:38.613 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:38.613 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:38.613 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.613 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:38.613 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.613 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.613 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.613 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.873 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
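The qpair listing above is the actual assertion target: its auth block must report the negotiated parameters. The traced jq probes boil down to:

  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" = completed ]   # handshake done
  [ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" = sha256 ]
  [ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" = null ]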
DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:17:38.873 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:17:39.444 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.444 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:39.444 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.444 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.705 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.705 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.705 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:39.705 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:39.705 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:39.705 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.705 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:39.705 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:39.705 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:39.705 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.705 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.705 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.705 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.705 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.705 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.705 08:16:36 
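After the SPDK-host round trip, the same credentials are replayed through the kernel initiator with nvme connect, whose --dhchap-secret/--dhchap-ctrl-secret flags take the DHHC-1 strings directly. To sanity-check such a secret by hand, the base64 body can be peeled back to the hex key (assuming the layout sketched earlier, with a 4-byte CRC tail; head -c -4 is the GNU form):

  sec='DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==:'
  body=${sec#DHHC-1:*:}     # drop the "DHHC-1:00:" prefix
  body=${body%:}            # and the trailing colon
  printf '%s' "$body" | base64 -d | head -c -4; echo   # prints the 48-char hex key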
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.705 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.965 00:17:39.965 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.965 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.965 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.225 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.225 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.226 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.226 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.226 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.226 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.226 { 00:17:40.226 "cntlid": 3, 00:17:40.226 "qid": 0, 00:17:40.226 "state": "enabled", 00:17:40.226 "thread": "nvmf_tgt_poll_group_000", 00:17:40.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:40.226 "listen_address": { 00:17:40.226 "trtype": "TCP", 00:17:40.226 "adrfam": "IPv4", 00:17:40.226 "traddr": "10.0.0.2", 00:17:40.226 "trsvcid": "4420" 00:17:40.226 }, 00:17:40.226 "peer_address": { 00:17:40.226 "trtype": "TCP", 00:17:40.226 "adrfam": "IPv4", 00:17:40.226 "traddr": "10.0.0.1", 00:17:40.226 "trsvcid": "53452" 00:17:40.226 }, 00:17:40.226 "auth": { 00:17:40.226 "state": "completed", 00:17:40.226 "digest": "sha256", 00:17:40.226 "dhgroup": "null" 00:17:40.226 } 00:17:40.226 } 00:17:40.226 ]' 00:17:40.226 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.226 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.226 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.226 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:40.226 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.226 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.226 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.226 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.487 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:17:40.487 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:17:41.058 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.058 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:41.058 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.058 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.058 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.058 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.058 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:41.058 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:41.318 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:41.318 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.318 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:41.318 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:41.318 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:41.318 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.318 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.318 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.318 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.318 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.318 08:16:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.318 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.318 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.580 00:17:41.580 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.580 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.580 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.841 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.841 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.841 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.841 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.841 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.841 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.841 { 00:17:41.841 "cntlid": 5, 00:17:41.841 "qid": 0, 00:17:41.841 "state": "enabled", 00:17:41.841 "thread": "nvmf_tgt_poll_group_000", 00:17:41.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:41.841 "listen_address": { 00:17:41.841 "trtype": "TCP", 00:17:41.841 "adrfam": "IPv4", 00:17:41.841 "traddr": "10.0.0.2", 00:17:41.841 "trsvcid": "4420" 00:17:41.841 }, 00:17:41.841 "peer_address": { 00:17:41.841 "trtype": "TCP", 00:17:41.841 "adrfam": "IPv4", 00:17:41.841 "traddr": "10.0.0.1", 00:17:41.841 "trsvcid": "53472" 00:17:41.841 }, 00:17:41.841 "auth": { 00:17:41.841 "state": "completed", 00:17:41.841 "digest": "sha256", 00:17:41.841 "dhgroup": "null" 00:17:41.841 } 00:17:41.841 } 00:17:41.841 ]' 00:17:41.841 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.841 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:41.841 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.841 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:41.841 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.841 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.841 08:16:39 
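The same cycle repeats for every key in this section. A condensed sketch of what connect_authenticate (target/auth.sh@65-78) appears to do per the trace; helper names (rpc_cmd, hostrpc, subnqn, hostnqn, ckeys) are taken from the trace, but the exact bodies are assumptions, not copied from the script:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # controller key is optional; expands to nothing when ckey$keyid is unset
        local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        # 1) allow the host NQN on the subsystem with this key pair
        rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key "key$keyid" "${ckey[@]}"
        # 2) attach a host-side controller, forcing DH-HMAC-CHAP
        hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid" "${ckey[@]}"
        # 3) verify the controller came up and the qpair finished authentication
        [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        [[ $(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" \
            | jq -r '.[0].auth.state') == completed ]]
        # 4) tear down before the next key is tried
        hostrpc bdev_nvme_detach_controller nvme0
    }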
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.841 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.103 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:17:42.103 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:17:42.675 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.675 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:42.675 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.675 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.675 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.675 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.675 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:42.675 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:42.937 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:42.937 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.937 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:42.937 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:42.937 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:42.937 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.937 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:42.937 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.937 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:42.937 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.937 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:42.937 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:42.937 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.198 00:17:43.198 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.198 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.198 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.460 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.460 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.460 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.460 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.460 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.460 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.460 { 00:17:43.460 "cntlid": 7, 00:17:43.460 "qid": 0, 00:17:43.460 "state": "enabled", 00:17:43.460 "thread": "nvmf_tgt_poll_group_000", 00:17:43.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:43.460 "listen_address": { 00:17:43.460 "trtype": "TCP", 00:17:43.460 "adrfam": "IPv4", 00:17:43.460 "traddr": "10.0.0.2", 00:17:43.460 "trsvcid": "4420" 00:17:43.460 }, 00:17:43.460 "peer_address": { 00:17:43.460 "trtype": "TCP", 00:17:43.460 "adrfam": "IPv4", 00:17:43.460 "traddr": "10.0.0.1", 00:17:43.460 "trsvcid": "53488" 00:17:43.460 }, 00:17:43.460 "auth": { 00:17:43.460 "state": "completed", 00:17:43.460 "digest": "sha256", 00:17:43.460 "dhgroup": "null" 00:17:43.460 } 00:17:43.460 } 00:17:43.460 ]' 00:17:43.460 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.460 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.460 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.460 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:43.460 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.460 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
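key3 is registered above with --dhchap-key only, unlike key1 and key2: the ${ckeys[$3]:+...} expansion at target/auth.sh@68 emits the --dhchap-ctrlr-key pair only when a controller key exists for that index. A standalone illustration of that expansion (array contents assumed for the example):

    ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2)   # no entry for index 3 in this run
    keyid=3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey[@]}"   # prints 0 -> the flag pair is dropped for key3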
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.460 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.460 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.721 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:17:43.721 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:17:44.292 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.292 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.292 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.292 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.292 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.292 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.292 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.292 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:44.292 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:44.554 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:44.554 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.554 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:44.554 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:44.554 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:44.554 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.554 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.554 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
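With the null group done, the trace re-enters the driver loop for ffdhe2048. A sketch of the loop structure implied by the @119-@123 lines, reconstructed from the trace rather than copied from target/auth.sh:

    for dhgroup in "${dhgroups[@]}"; do      # null ffdhe2048 ffdhe3072 ...
        for keyid in "${!keys[@]}"; do       # 0 1 2 3
            # re-arm the host with exactly one digest/dhgroup before each attempt
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done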
common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.554 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.554 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.554 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.554 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.554 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.815 00:17:44.815 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.815 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.815 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.815 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.815 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.815 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.815 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.815 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.815 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.815 { 00:17:44.815 "cntlid": 9, 00:17:44.815 "qid": 0, 00:17:44.815 "state": "enabled", 00:17:44.815 "thread": "nvmf_tgt_poll_group_000", 00:17:44.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:44.815 "listen_address": { 00:17:44.815 "trtype": "TCP", 00:17:44.815 "adrfam": "IPv4", 00:17:44.815 "traddr": "10.0.0.2", 00:17:44.815 "trsvcid": "4420" 00:17:44.815 }, 00:17:44.815 "peer_address": { 00:17:44.815 "trtype": "TCP", 00:17:44.815 "adrfam": "IPv4", 00:17:44.815 "traddr": "10.0.0.1", 00:17:44.815 "trsvcid": "53510" 00:17:44.815 }, 00:17:44.815 "auth": { 00:17:44.815 "state": "completed", 00:17:44.815 "digest": "sha256", 00:17:44.815 "dhgroup": "ffdhe2048" 00:17:44.815 } 00:17:44.815 } 00:17:44.815 ]' 00:17:44.815 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.077 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:45.077 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.077 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:17:45.077 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.077 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.077 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.077 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.077 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:17:45.077 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:17:46.019 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.019 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:46.019 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.019 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.019 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.019 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.019 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:46.019 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:46.019 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:46.019 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.019 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:46.019 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:46.019 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:46.019 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.019 08:16:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.019 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.019 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.019 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.019 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.019 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.019 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.281 00:17:46.281 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.281 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.281 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.542 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.542 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.542 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.542 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.542 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.542 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.542 { 00:17:46.542 "cntlid": 11, 00:17:46.542 "qid": 0, 00:17:46.542 "state": "enabled", 00:17:46.542 "thread": "nvmf_tgt_poll_group_000", 00:17:46.542 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:46.542 "listen_address": { 00:17:46.542 "trtype": "TCP", 00:17:46.542 "adrfam": "IPv4", 00:17:46.542 "traddr": "10.0.0.2", 00:17:46.542 "trsvcid": "4420" 00:17:46.542 }, 00:17:46.542 "peer_address": { 00:17:46.542 "trtype": "TCP", 00:17:46.542 "adrfam": "IPv4", 00:17:46.542 "traddr": "10.0.0.1", 00:17:46.542 "trsvcid": "53546" 00:17:46.542 }, 00:17:46.542 "auth": { 00:17:46.542 "state": "completed", 00:17:46.542 "digest": "sha256", 00:17:46.542 "dhgroup": "ffdhe2048" 00:17:46.542 } 00:17:46.542 } 00:17:46.542 ]' 00:17:46.542 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.542 08:16:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.542 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.542 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:46.542 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.542 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.542 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.542 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.803 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:17:46.803 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:17:47.374 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.375 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:47.375 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.375 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.375 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.375 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.375 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:47.375 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:47.635 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:47.635 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.635 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:47.635 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:47.635 08:16:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:47.635 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.635 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.635 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.635 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.635 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.635 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.635 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.635 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.901 00:17:47.901 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.901 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.901 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.162 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.162 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.162 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.162 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.162 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.162 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.162 { 00:17:48.162 "cntlid": 13, 00:17:48.162 "qid": 0, 00:17:48.162 "state": "enabled", 00:17:48.162 "thread": "nvmf_tgt_poll_group_000", 00:17:48.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:48.162 "listen_address": { 00:17:48.162 "trtype": "TCP", 00:17:48.162 "adrfam": "IPv4", 00:17:48.162 "traddr": "10.0.0.2", 00:17:48.162 "trsvcid": "4420" 00:17:48.162 }, 00:17:48.162 "peer_address": { 00:17:48.162 "trtype": "TCP", 00:17:48.162 "adrfam": "IPv4", 00:17:48.162 "traddr": "10.0.0.1", 00:17:48.162 "trsvcid": "45742" 00:17:48.162 }, 00:17:48.162 "auth": { 00:17:48.162 "state": "completed", 00:17:48.162 "digest": 
"sha256", 00:17:48.162 "dhgroup": "ffdhe2048" 00:17:48.162 } 00:17:48.162 } 00:17:48.162 ]' 00:17:48.162 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.163 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.163 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.163 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:48.163 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.163 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.163 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.163 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.423 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:17:48.423 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:17:48.995 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.995 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:48.995 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.995 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.995 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.995 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.995 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:48.995 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:49.256 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:17:49.256 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.256 08:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:49.256 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:49.256 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:49.256 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.256 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:49.256 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.256 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.256 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.256 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:49.256 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.256 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.517 00:17:49.517 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.517 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.517 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.777 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.777 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.777 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.777 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.777 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.777 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.777 { 00:17:49.777 "cntlid": 15, 00:17:49.777 "qid": 0, 00:17:49.777 "state": "enabled", 00:17:49.777 "thread": "nvmf_tgt_poll_group_000", 00:17:49.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:49.777 "listen_address": { 00:17:49.777 "trtype": "TCP", 00:17:49.777 "adrfam": "IPv4", 00:17:49.777 "traddr": "10.0.0.2", 00:17:49.777 "trsvcid": "4420" 00:17:49.777 }, 00:17:49.777 "peer_address": { 00:17:49.777 "trtype": "TCP", 00:17:49.777 "adrfam": "IPv4", 00:17:49.777 "traddr": "10.0.0.1", 00:17:49.777 
"trsvcid": "45764" 00:17:49.777 }, 00:17:49.777 "auth": { 00:17:49.777 "state": "completed", 00:17:49.777 "digest": "sha256", 00:17:49.777 "dhgroup": "ffdhe2048" 00:17:49.777 } 00:17:49.777 } 00:17:49.777 ]' 00:17:49.777 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.777 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.777 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.777 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:49.777 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.777 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.777 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.777 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.037 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:17:50.037 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:17:50.643 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.643 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.643 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.643 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.643 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.643 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.643 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.643 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:50.643 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:50.975 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:17:50.975 08:16:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.975 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:50.975 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:50.975 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:50.975 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.975 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.975 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.975 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.975 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.975 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.975 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.975 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.235 00:17:51.235 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.235 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.236 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.496 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.496 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.496 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.496 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.496 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.496 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.496 { 00:17:51.496 "cntlid": 17, 00:17:51.496 "qid": 0, 00:17:51.496 "state": "enabled", 00:17:51.496 "thread": "nvmf_tgt_poll_group_000", 00:17:51.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:51.496 "listen_address": { 00:17:51.496 "trtype": "TCP", 00:17:51.496 "adrfam": "IPv4", 
00:17:51.496 "traddr": "10.0.0.2", 00:17:51.496 "trsvcid": "4420" 00:17:51.496 }, 00:17:51.496 "peer_address": { 00:17:51.496 "trtype": "TCP", 00:17:51.496 "adrfam": "IPv4", 00:17:51.496 "traddr": "10.0.0.1", 00:17:51.496 "trsvcid": "45778" 00:17:51.496 }, 00:17:51.496 "auth": { 00:17:51.496 "state": "completed", 00:17:51.496 "digest": "sha256", 00:17:51.496 "dhgroup": "ffdhe3072" 00:17:51.496 } 00:17:51.496 } 00:17:51.496 ]' 00:17:51.496 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.496 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.496 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.496 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:51.496 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.496 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.496 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.496 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.757 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:17:51.758 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:17:52.328 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.328 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:52.328 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.328 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.328 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.328 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.328 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:52.328 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:52.589 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:17:52.589 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.589 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:52.589 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:52.589 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:52.589 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.589 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.589 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.589 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.589 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.589 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.589 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.589 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.849 00:17:52.849 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.849 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.849 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.109 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.109 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.109 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.109 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.109 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.109 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.109 { 
00:17:53.109 "cntlid": 19, 00:17:53.109 "qid": 0, 00:17:53.109 "state": "enabled", 00:17:53.109 "thread": "nvmf_tgt_poll_group_000", 00:17:53.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:53.109 "listen_address": { 00:17:53.109 "trtype": "TCP", 00:17:53.109 "adrfam": "IPv4", 00:17:53.109 "traddr": "10.0.0.2", 00:17:53.109 "trsvcid": "4420" 00:17:53.109 }, 00:17:53.109 "peer_address": { 00:17:53.109 "trtype": "TCP", 00:17:53.109 "adrfam": "IPv4", 00:17:53.109 "traddr": "10.0.0.1", 00:17:53.109 "trsvcid": "45814" 00:17:53.109 }, 00:17:53.109 "auth": { 00:17:53.109 "state": "completed", 00:17:53.109 "digest": "sha256", 00:17:53.109 "dhgroup": "ffdhe3072" 00:17:53.109 } 00:17:53.109 } 00:17:53.109 ]' 00:17:53.109 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.109 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.109 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.109 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:53.109 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.109 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.109 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.109 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.369 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:17:53.369 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:17:53.938 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.938 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.938 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.938 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.938 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.938 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.938 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:53.938 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:54.200 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:17:54.200 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.200 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:54.200 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:54.200 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:54.200 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.200 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.200 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.200 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.200 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.200 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.200 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.200 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.460 00:17:54.460 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.460 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.460 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.721 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.721 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.721 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.721 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.721 08:16:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.721 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.721 { 00:17:54.721 "cntlid": 21, 00:17:54.721 "qid": 0, 00:17:54.721 "state": "enabled", 00:17:54.721 "thread": "nvmf_tgt_poll_group_000", 00:17:54.721 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:54.721 "listen_address": { 00:17:54.721 "trtype": "TCP", 00:17:54.721 "adrfam": "IPv4", 00:17:54.721 "traddr": "10.0.0.2", 00:17:54.721 "trsvcid": "4420" 00:17:54.721 }, 00:17:54.721 "peer_address": { 00:17:54.721 "trtype": "TCP", 00:17:54.721 "adrfam": "IPv4", 00:17:54.721 "traddr": "10.0.0.1", 00:17:54.721 "trsvcid": "45836" 00:17:54.721 }, 00:17:54.721 "auth": { 00:17:54.721 "state": "completed", 00:17:54.721 "digest": "sha256", 00:17:54.721 "dhgroup": "ffdhe3072" 00:17:54.721 } 00:17:54.721 } 00:17:54.721 ]' 00:17:54.721 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.721 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.721 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.721 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:54.721 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.721 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.721 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.721 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.982 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:17:54.982 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:17:55.553 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.553 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.553 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.553 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.553 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:55.553 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.553 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:55.553 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:55.813 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:17:55.813 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.813 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:55.813 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:55.813 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:55.813 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.813 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:55.813 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.813 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.813 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.813 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:55.813 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:55.813 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.075 00:17:56.075 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.075 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.075 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.335 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.335 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.335 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.335 08:16:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.335 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.335 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.335 { 00:17:56.335 "cntlid": 23, 00:17:56.335 "qid": 0, 00:17:56.335 "state": "enabled", 00:17:56.335 "thread": "nvmf_tgt_poll_group_000", 00:17:56.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:56.335 "listen_address": { 00:17:56.335 "trtype": "TCP", 00:17:56.335 "adrfam": "IPv4", 00:17:56.335 "traddr": "10.0.0.2", 00:17:56.335 "trsvcid": "4420" 00:17:56.335 }, 00:17:56.335 "peer_address": { 00:17:56.335 "trtype": "TCP", 00:17:56.335 "adrfam": "IPv4", 00:17:56.335 "traddr": "10.0.0.1", 00:17:56.335 "trsvcid": "45866" 00:17:56.335 }, 00:17:56.335 "auth": { 00:17:56.335 "state": "completed", 00:17:56.335 "digest": "sha256", 00:17:56.335 "dhgroup": "ffdhe3072" 00:17:56.335 } 00:17:56.335 } 00:17:56.335 ]' 00:17:56.335 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.335 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:56.335 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.335 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:56.335 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.335 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.335 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.335 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.595 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:17:56.595 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:17:57.165 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.165 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.165 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.165 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.165 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:57.165 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.165 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.165 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:57.165 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:57.426 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:17:57.426 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.426 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:57.426 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:57.426 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:57.426 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.426 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.426 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.426 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.426 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.426 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.426 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.426 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.686 00:17:57.686 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.686 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.686 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.949 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.949 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.949 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.949 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.949 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.949 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.949 { 00:17:57.949 "cntlid": 25, 00:17:57.949 "qid": 0, 00:17:57.949 "state": "enabled", 00:17:57.949 "thread": "nvmf_tgt_poll_group_000", 00:17:57.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:57.949 "listen_address": { 00:17:57.949 "trtype": "TCP", 00:17:57.949 "adrfam": "IPv4", 00:17:57.949 "traddr": "10.0.0.2", 00:17:57.949 "trsvcid": "4420" 00:17:57.949 }, 00:17:57.949 "peer_address": { 00:17:57.949 "trtype": "TCP", 00:17:57.949 "adrfam": "IPv4", 00:17:57.949 "traddr": "10.0.0.1", 00:17:57.949 "trsvcid": "41522" 00:17:57.949 }, 00:17:57.949 "auth": { 00:17:57.949 "state": "completed", 00:17:57.949 "digest": "sha256", 00:17:57.949 "dhgroup": "ffdhe4096" 00:17:57.949 } 00:17:57.949 } 00:17:57.949 ]' 00:17:57.949 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.949 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.949 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.949 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:57.950 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.950 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.950 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.950 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.210 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:17:58.210 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:17:58.781 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.781 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.781 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.781 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.781 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.781 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.781 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:58.781 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:59.042 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:59.042 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.042 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:59.042 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:59.042 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:59.042 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.042 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.042 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.042 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.042 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.042 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.042 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.042 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.302 00:17:59.302 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.302 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.302 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.564 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.564 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.564 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.564 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.564 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.564 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.564 { 00:17:59.564 "cntlid": 27, 00:17:59.564 "qid": 0, 00:17:59.564 "state": "enabled", 00:17:59.564 "thread": "nvmf_tgt_poll_group_000", 00:17:59.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:59.564 "listen_address": { 00:17:59.564 "trtype": "TCP", 00:17:59.564 "adrfam": "IPv4", 00:17:59.564 "traddr": "10.0.0.2", 00:17:59.564 "trsvcid": "4420" 00:17:59.564 }, 00:17:59.564 "peer_address": { 00:17:59.564 "trtype": "TCP", 00:17:59.564 "adrfam": "IPv4", 00:17:59.564 "traddr": "10.0.0.1", 00:17:59.564 "trsvcid": "41546" 00:17:59.564 }, 00:17:59.564 "auth": { 00:17:59.564 "state": "completed", 00:17:59.564 "digest": "sha256", 00:17:59.564 "dhgroup": "ffdhe4096" 00:17:59.564 } 00:17:59.564 } 00:17:59.564 ]' 00:17:59.564 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.564 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.564 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.564 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:59.564 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.564 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.564 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.564 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.825 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:17:59.825 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:18:00.397 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:18:00.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.397 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:00.397 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.397 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.658 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.658 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.658 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:00.658 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:00.658 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:18:00.658 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.658 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:00.658 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:00.658 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:00.658 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.658 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.658 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.658 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.658 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.658 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.658 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.658 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.918 00:18:00.918 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:18:00.918 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.918 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.179 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.179 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.179 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.179 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.179 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.179 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.179 { 00:18:01.179 "cntlid": 29, 00:18:01.179 "qid": 0, 00:18:01.179 "state": "enabled", 00:18:01.179 "thread": "nvmf_tgt_poll_group_000", 00:18:01.179 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:01.179 "listen_address": { 00:18:01.179 "trtype": "TCP", 00:18:01.179 "adrfam": "IPv4", 00:18:01.179 "traddr": "10.0.0.2", 00:18:01.179 "trsvcid": "4420" 00:18:01.179 }, 00:18:01.179 "peer_address": { 00:18:01.179 "trtype": "TCP", 00:18:01.179 "adrfam": "IPv4", 00:18:01.179 "traddr": "10.0.0.1", 00:18:01.179 "trsvcid": "41560" 00:18:01.179 }, 00:18:01.179 "auth": { 00:18:01.179 "state": "completed", 00:18:01.179 "digest": "sha256", 00:18:01.179 "dhgroup": "ffdhe4096" 00:18:01.179 } 00:18:01.179 } 00:18:01.179 ]' 00:18:01.179 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.179 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.179 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.179 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:01.179 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.439 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.439 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.439 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.439 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:18:01.439 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: 
--dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:18:02.381 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.381 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.381 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.381 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.381 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.381 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.381 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:02.381 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:02.381 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:18:02.381 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.381 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:02.381 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:02.381 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:02.381 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.381 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:02.381 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.381 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.381 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.381 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:02.381 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.381 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:02.642 00:18:02.642 08:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.642 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.642 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.903 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.903 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.903 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.903 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.903 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.903 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.903 { 00:18:02.903 "cntlid": 31, 00:18:02.903 "qid": 0, 00:18:02.903 "state": "enabled", 00:18:02.903 "thread": "nvmf_tgt_poll_group_000", 00:18:02.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:02.903 "listen_address": { 00:18:02.903 "trtype": "TCP", 00:18:02.903 "adrfam": "IPv4", 00:18:02.903 "traddr": "10.0.0.2", 00:18:02.903 "trsvcid": "4420" 00:18:02.903 }, 00:18:02.903 "peer_address": { 00:18:02.903 "trtype": "TCP", 00:18:02.903 "adrfam": "IPv4", 00:18:02.903 "traddr": "10.0.0.1", 00:18:02.903 "trsvcid": "41596" 00:18:02.903 }, 00:18:02.903 "auth": { 00:18:02.903 "state": "completed", 00:18:02.903 "digest": "sha256", 00:18:02.903 "dhgroup": "ffdhe4096" 00:18:02.903 } 00:18:02.903 } 00:18:02.903 ]' 00:18:02.903 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.903 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.903 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.903 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:02.903 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.903 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.903 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.903 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.164 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:18:03.164 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:18:03.736 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.736 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.736 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.736 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.736 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.736 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.736 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.736 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:03.737 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:03.998 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:18:03.998 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.998 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:03.998 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:03.998 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:03.998 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.998 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.998 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.998 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.998 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.998 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.998 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.998 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.259 00:18:04.260 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.260 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.260 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.520 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.520 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.520 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.520 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.520 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.521 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.521 { 00:18:04.521 "cntlid": 33, 00:18:04.521 "qid": 0, 00:18:04.521 "state": "enabled", 00:18:04.521 "thread": "nvmf_tgt_poll_group_000", 00:18:04.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:04.521 "listen_address": { 00:18:04.521 "trtype": "TCP", 00:18:04.521 "adrfam": "IPv4", 00:18:04.521 "traddr": "10.0.0.2", 00:18:04.521 "trsvcid": "4420" 00:18:04.521 }, 00:18:04.521 "peer_address": { 00:18:04.521 "trtype": "TCP", 00:18:04.521 "adrfam": "IPv4", 00:18:04.521 "traddr": "10.0.0.1", 00:18:04.521 "trsvcid": "41628" 00:18:04.521 }, 00:18:04.521 "auth": { 00:18:04.521 "state": "completed", 00:18:04.521 "digest": "sha256", 00:18:04.521 "dhgroup": "ffdhe6144" 00:18:04.521 } 00:18:04.521 } 00:18:04.521 ]' 00:18:04.521 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.521 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.521 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.781 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:04.781 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.781 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.781 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.781 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.781 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret 
DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:18:04.781 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:18:05.730 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.730 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.730 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.730 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.730 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.730 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.730 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:05.730 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:05.730 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:18:05.730 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.730 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:05.730 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:05.730 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:05.730 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.730 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.730 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.730 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.730 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.730 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.730 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.730 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.990 00:18:05.990 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.990 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.990 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.251 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.251 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.251 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.251 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.251 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.251 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.251 { 00:18:06.251 "cntlid": 35, 00:18:06.251 "qid": 0, 00:18:06.251 "state": "enabled", 00:18:06.251 "thread": "nvmf_tgt_poll_group_000", 00:18:06.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:06.251 "listen_address": { 00:18:06.251 "trtype": "TCP", 00:18:06.251 "adrfam": "IPv4", 00:18:06.251 "traddr": "10.0.0.2", 00:18:06.251 "trsvcid": "4420" 00:18:06.251 }, 00:18:06.251 "peer_address": { 00:18:06.251 "trtype": "TCP", 00:18:06.251 "adrfam": "IPv4", 00:18:06.251 "traddr": "10.0.0.1", 00:18:06.251 "trsvcid": "41660" 00:18:06.251 }, 00:18:06.251 "auth": { 00:18:06.251 "state": "completed", 00:18:06.251 "digest": "sha256", 00:18:06.251 "dhgroup": "ffdhe6144" 00:18:06.251 } 00:18:06.251 } 00:18:06.251 ]' 00:18:06.251 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.251 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.251 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.251 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:06.251 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.511 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.511 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.511 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.511 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:18:06.511 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:18:07.452 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.452 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.452 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.452 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.452 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.452 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.452 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:07.452 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:07.452 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:18:07.452 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.452 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:07.452 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:07.452 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:07.452 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.452 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.452 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.452 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.452 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.452 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.452 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.452 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.713 00:18:07.713 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.713 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.713 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.972 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.972 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.972 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.972 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.972 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.972 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.972 { 00:18:07.972 "cntlid": 37, 00:18:07.972 "qid": 0, 00:18:07.972 "state": "enabled", 00:18:07.972 "thread": "nvmf_tgt_poll_group_000", 00:18:07.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:07.972 "listen_address": { 00:18:07.972 "trtype": "TCP", 00:18:07.972 "adrfam": "IPv4", 00:18:07.972 "traddr": "10.0.0.2", 00:18:07.972 "trsvcid": "4420" 00:18:07.972 }, 00:18:07.972 "peer_address": { 00:18:07.972 "trtype": "TCP", 00:18:07.972 "adrfam": "IPv4", 00:18:07.972 "traddr": "10.0.0.1", 00:18:07.972 "trsvcid": "45834" 00:18:07.972 }, 00:18:07.972 "auth": { 00:18:07.972 "state": "completed", 00:18:07.972 "digest": "sha256", 00:18:07.972 "dhgroup": "ffdhe6144" 00:18:07.972 } 00:18:07.972 } 00:18:07.972 ]' 00:18:07.972 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.972 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:07.972 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.972 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:07.972 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.972 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.972 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:07.972 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.231 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:18:08.231 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:18:08.800 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.060 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.060 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.060 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.060 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.060 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.060 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:09.060 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:09.060 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:18:09.060 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.060 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:09.060 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:09.060 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:09.060 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.060 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:09.060 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.060 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.060 08:17:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.060 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:09.060 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.060 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.320 00:18:09.582 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.582 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.582 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.582 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.582 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.582 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.582 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.582 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.582 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.582 { 00:18:09.582 "cntlid": 39, 00:18:09.582 "qid": 0, 00:18:09.582 "state": "enabled", 00:18:09.582 "thread": "nvmf_tgt_poll_group_000", 00:18:09.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:09.582 "listen_address": { 00:18:09.582 "trtype": "TCP", 00:18:09.582 "adrfam": "IPv4", 00:18:09.582 "traddr": "10.0.0.2", 00:18:09.582 "trsvcid": "4420" 00:18:09.582 }, 00:18:09.582 "peer_address": { 00:18:09.582 "trtype": "TCP", 00:18:09.582 "adrfam": "IPv4", 00:18:09.582 "traddr": "10.0.0.1", 00:18:09.582 "trsvcid": "45864" 00:18:09.582 }, 00:18:09.582 "auth": { 00:18:09.582 "state": "completed", 00:18:09.582 "digest": "sha256", 00:18:09.582 "dhgroup": "ffdhe6144" 00:18:09.582 } 00:18:09.582 } 00:18:09.582 ]' 00:18:09.582 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.582 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.582 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.842 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:09.842 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.842 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:18:09.842 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.842 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.103 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:18:10.104 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:18:10.674 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.674 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.674 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.674 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.674 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.674 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.674 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.674 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:10.674 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:10.674 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:18:10.674 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.674 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:10.674 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:10.674 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:10.674 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.674 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.674 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
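The trace above repeats one fixed cycle per digest/dhgroup/key combination: bdev_nvme_set_options restricts the host to the DH-HMAC-CHAP digest and DH group under test, nvmf_subsystem_add_host registers the host NQN on the target with --dhchap-key keyN (plus --dhchap-ctrlr-key ckeyN when a controller key exists, which makes the authentication bidirectional), bdev_nvme_attach_controller connects with the matching key, and the qpair's auth block is checked before everything is torn down again. A condensed sketch of that loop body, using the same rpc.py calls recorded here; <digest>, <dhgroup>, <hostnqn> and keyN are placeholders for the values the script iterates over:

  # host side: permit only the digest/dhgroup under test
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests <digest> --dhchap-dhgroups <dhgroup>
  # target side: register the host NQN with this round's key pair
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> \
      --dhchap-key keyN --dhchap-ctrlr-key ckeyN
  # host side: attach; DH-HMAC-CHAP runs during connect
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q <hostnqn> \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key keyN --dhchap-ctrlr-key ckeyN
  # verify the negotiated parameters on the qpair
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | \
      jq -r '.[0].auth.state'        # expected: completed
  # tear down before the next combination
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>

Every RPC name and flag above appears verbatim in the trace; only the placeholder notation is added.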
00:18:10.674 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.934 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.934 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.934 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.934 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.194 00:18:11.195 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.195 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.195 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.455 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.455 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.455 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.455 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.455 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.456 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.456 { 00:18:11.456 "cntlid": 41, 00:18:11.456 "qid": 0, 00:18:11.456 "state": "enabled", 00:18:11.456 "thread": "nvmf_tgt_poll_group_000", 00:18:11.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:11.456 "listen_address": { 00:18:11.456 "trtype": "TCP", 00:18:11.456 "adrfam": "IPv4", 00:18:11.456 "traddr": "10.0.0.2", 00:18:11.456 "trsvcid": "4420" 00:18:11.456 }, 00:18:11.456 "peer_address": { 00:18:11.456 "trtype": "TCP", 00:18:11.456 "adrfam": "IPv4", 00:18:11.456 "traddr": "10.0.0.1", 00:18:11.456 "trsvcid": "45890" 00:18:11.456 }, 00:18:11.456 "auth": { 00:18:11.456 "state": "completed", 00:18:11.456 "digest": "sha256", 00:18:11.456 "dhgroup": "ffdhe8192" 00:18:11.456 } 00:18:11.456 } 00:18:11.456 ]' 00:18:11.456 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.456 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.456 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.456 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:11.456 08:17:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.717 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.717 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.717 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.717 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:18:11.717 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:18:12.659 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.659 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:12.659 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.659 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.659 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.659 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.659 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:12.659 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:12.659 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:18:12.659 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.659 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:12.659 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:12.659 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:12.659 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.659 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.659 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.659 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.659 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.659 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.659 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.659 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.230 00:18:13.230 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.230 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.230 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.230 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.230 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.230 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.230 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.492 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.492 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.492 { 00:18:13.492 "cntlid": 43, 00:18:13.492 "qid": 0, 00:18:13.492 "state": "enabled", 00:18:13.492 "thread": "nvmf_tgt_poll_group_000", 00:18:13.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:13.492 "listen_address": { 00:18:13.492 "trtype": "TCP", 00:18:13.492 "adrfam": "IPv4", 00:18:13.492 "traddr": "10.0.0.2", 00:18:13.492 "trsvcid": "4420" 00:18:13.492 }, 00:18:13.492 "peer_address": { 00:18:13.492 "trtype": "TCP", 00:18:13.492 "adrfam": "IPv4", 00:18:13.492 "traddr": "10.0.0.1", 00:18:13.492 "trsvcid": "45916" 00:18:13.492 }, 00:18:13.492 "auth": { 00:18:13.492 "state": "completed", 00:18:13.492 "digest": "sha256", 00:18:13.492 "dhgroup": "ffdhe8192" 00:18:13.492 } 00:18:13.492 } 00:18:13.492 ]' 00:18:13.492 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.492 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:18:13.492 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.492 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:13.492 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.492 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.492 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.492 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.753 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:18:13.753 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:18:14.325 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.325 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:14.325 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.325 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.325 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.325 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.325 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:14.325 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:14.587 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:18:14.587 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.587 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:14.587 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:14.587 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:14.587 08:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.587 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.587 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.587 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.587 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.587 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.587 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.587 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.157 00:18:15.157 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.157 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.157 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.157 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.157 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.157 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.157 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.158 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.158 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.158 { 00:18:15.158 "cntlid": 45, 00:18:15.158 "qid": 0, 00:18:15.158 "state": "enabled", 00:18:15.158 "thread": "nvmf_tgt_poll_group_000", 00:18:15.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:15.158 "listen_address": { 00:18:15.158 "trtype": "TCP", 00:18:15.158 "adrfam": "IPv4", 00:18:15.158 "traddr": "10.0.0.2", 00:18:15.158 "trsvcid": "4420" 00:18:15.158 }, 00:18:15.158 "peer_address": { 00:18:15.158 "trtype": "TCP", 00:18:15.158 "adrfam": "IPv4", 00:18:15.158 "traddr": "10.0.0.1", 00:18:15.158 "trsvcid": "45954" 00:18:15.158 }, 00:18:15.158 "auth": { 00:18:15.158 "state": "completed", 00:18:15.158 "digest": "sha256", 00:18:15.158 "dhgroup": "ffdhe8192" 00:18:15.158 } 00:18:15.158 } 00:18:15.158 ]' 00:18:15.158 
08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.417 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:15.417 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.417 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:15.417 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.417 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.417 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.417 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.677 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:18:15.677 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:18:16.247 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.247 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.247 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.247 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.247 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.247 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.247 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:16.247 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:16.507 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:18:16.507 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.507 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:16.507 08:17:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:16.507 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:16.507 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.507 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:16.507 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.507 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.507 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.507 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:16.507 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:16.507 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.077 00:18:17.077 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.077 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.077 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.077 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.077 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.077 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.077 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.078 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.078 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.078 { 00:18:17.078 "cntlid": 47, 00:18:17.078 "qid": 0, 00:18:17.078 "state": "enabled", 00:18:17.078 "thread": "nvmf_tgt_poll_group_000", 00:18:17.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:17.078 "listen_address": { 00:18:17.078 "trtype": "TCP", 00:18:17.078 "adrfam": "IPv4", 00:18:17.078 "traddr": "10.0.0.2", 00:18:17.078 "trsvcid": "4420" 00:18:17.078 }, 00:18:17.078 "peer_address": { 00:18:17.078 "trtype": "TCP", 00:18:17.078 "adrfam": "IPv4", 00:18:17.078 "traddr": "10.0.0.1", 00:18:17.078 "trsvcid": "45972" 00:18:17.078 }, 00:18:17.078 "auth": { 00:18:17.078 "state": "completed", 00:18:17.078 
"digest": "sha256", 00:18:17.078 "dhgroup": "ffdhe8192" 00:18:17.078 } 00:18:17.078 } 00:18:17.078 ]' 00:18:17.078 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.078 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.078 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.337 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:17.337 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.337 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.337 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.337 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.337 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:18:17.337 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:18:18.275 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.275 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.275 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.275 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.275 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.275 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:18.275 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:18.275 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.275 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:18.275 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:18.275 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:18:18.275 08:17:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.275 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:18.275 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:18.275 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:18.275 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.275 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.275 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.275 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.275 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.275 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.275 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.275 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.535 00:18:18.535 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.535 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.536 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.797 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.797 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.797 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.797 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.797 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.797 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.797 { 00:18:18.797 "cntlid": 49, 00:18:18.797 "qid": 0, 00:18:18.797 "state": "enabled", 00:18:18.797 "thread": "nvmf_tgt_poll_group_000", 00:18:18.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:18.797 "listen_address": { 00:18:18.797 "trtype": "TCP", 00:18:18.797 "adrfam": "IPv4", 
00:18:18.797 "traddr": "10.0.0.2", 00:18:18.797 "trsvcid": "4420" 00:18:18.797 }, 00:18:18.797 "peer_address": { 00:18:18.797 "trtype": "TCP", 00:18:18.797 "adrfam": "IPv4", 00:18:18.797 "traddr": "10.0.0.1", 00:18:18.797 "trsvcid": "37366" 00:18:18.797 }, 00:18:18.797 "auth": { 00:18:18.797 "state": "completed", 00:18:18.797 "digest": "sha384", 00:18:18.797 "dhgroup": "null" 00:18:18.797 } 00:18:18.797 } 00:18:18.797 ]' 00:18:18.797 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.797 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.797 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.797 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:18.797 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.797 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.797 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.797 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.057 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:18:19.057 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:18:19.630 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.630 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.630 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.630 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.630 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.630 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.630 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:19.630 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:19.892 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:18:19.892 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.892 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:19.892 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:19.892 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:19.892 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.892 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.892 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.892 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.892 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.892 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.892 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.892 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.153 00:18:20.153 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.153 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.153 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.153 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.153 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.153 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.153 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.413 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.413 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.413 { 00:18:20.413 "cntlid": 51, 00:18:20.413 "qid": 0, 00:18:20.413 "state": "enabled", 
00:18:20.413 "thread": "nvmf_tgt_poll_group_000", 00:18:20.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:20.413 "listen_address": { 00:18:20.413 "trtype": "TCP", 00:18:20.413 "adrfam": "IPv4", 00:18:20.413 "traddr": "10.0.0.2", 00:18:20.413 "trsvcid": "4420" 00:18:20.413 }, 00:18:20.413 "peer_address": { 00:18:20.413 "trtype": "TCP", 00:18:20.413 "adrfam": "IPv4", 00:18:20.413 "traddr": "10.0.0.1", 00:18:20.413 "trsvcid": "37386" 00:18:20.413 }, 00:18:20.413 "auth": { 00:18:20.413 "state": "completed", 00:18:20.413 "digest": "sha384", 00:18:20.413 "dhgroup": "null" 00:18:20.413 } 00:18:20.413 } 00:18:20.413 ]' 00:18:20.413 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.413 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:20.413 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.413 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:20.413 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.413 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.413 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.413 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.674 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:18:20.674 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:18:21.245 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.245 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:21.245 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.245 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.245 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.245 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.245 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:18:21.245 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:21.506 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:18:21.506 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.506 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:21.506 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:21.506 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:21.506 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.506 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.506 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.506 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.506 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.506 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.506 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.506 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.767 00:18:21.767 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.767 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.767 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.767 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.767 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.767 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.767 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.028 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.028 08:17:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.028 { 00:18:22.028 "cntlid": 53, 00:18:22.028 "qid": 0, 00:18:22.028 "state": "enabled", 00:18:22.028 "thread": "nvmf_tgt_poll_group_000", 00:18:22.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:22.028 "listen_address": { 00:18:22.028 "trtype": "TCP", 00:18:22.028 "adrfam": "IPv4", 00:18:22.028 "traddr": "10.0.0.2", 00:18:22.028 "trsvcid": "4420" 00:18:22.028 }, 00:18:22.028 "peer_address": { 00:18:22.028 "trtype": "TCP", 00:18:22.028 "adrfam": "IPv4", 00:18:22.028 "traddr": "10.0.0.1", 00:18:22.028 "trsvcid": "37420" 00:18:22.028 }, 00:18:22.028 "auth": { 00:18:22.028 "state": "completed", 00:18:22.028 "digest": "sha384", 00:18:22.028 "dhgroup": "null" 00:18:22.028 } 00:18:22.028 } 00:18:22.028 ]' 00:18:22.028 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.028 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:22.028 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.028 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:22.029 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.029 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.029 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.029 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.289 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:18:22.289 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:18:22.861 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.861 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.861 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.861 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.861 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.861 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:18:22.861 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:22.861 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:23.122 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:18:23.122 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.122 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:23.122 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:23.122 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:23.122 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.122 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:23.122 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.122 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.122 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.122 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:23.122 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.122 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.383 00:18:23.383 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.383 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.383 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.383 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.383 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.383 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.383 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.383 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.383 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.383 { 00:18:23.383 "cntlid": 55, 00:18:23.383 "qid": 0, 00:18:23.383 "state": "enabled", 00:18:23.383 "thread": "nvmf_tgt_poll_group_000", 00:18:23.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:23.383 "listen_address": { 00:18:23.383 "trtype": "TCP", 00:18:23.383 "adrfam": "IPv4", 00:18:23.383 "traddr": "10.0.0.2", 00:18:23.383 "trsvcid": "4420" 00:18:23.383 }, 00:18:23.383 "peer_address": { 00:18:23.383 "trtype": "TCP", 00:18:23.383 "adrfam": "IPv4", 00:18:23.383 "traddr": "10.0.0.1", 00:18:23.383 "trsvcid": "37450" 00:18:23.383 }, 00:18:23.383 "auth": { 00:18:23.383 "state": "completed", 00:18:23.383 "digest": "sha384", 00:18:23.383 "dhgroup": "null" 00:18:23.383 } 00:18:23.383 } 00:18:23.383 ]' 00:18:23.383 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.643 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.643 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.643 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:23.643 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.643 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.643 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.643 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.903 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:18:23.903 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:18:24.476 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.476 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.476 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.476 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.476 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.476 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:24.476 08:17:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.476 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:24.476 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:24.736 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:18:24.736 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.736 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:24.736 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:24.736 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:24.736 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.736 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.736 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.736 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.736 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.736 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.736 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.736 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.736 00:18:24.997 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.997 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.997 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.997 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.997 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.997 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:24.997 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.997 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.997 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.997 { 00:18:24.997 "cntlid": 57, 00:18:24.997 "qid": 0, 00:18:24.997 "state": "enabled", 00:18:24.997 "thread": "nvmf_tgt_poll_group_000", 00:18:24.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:24.997 "listen_address": { 00:18:24.997 "trtype": "TCP", 00:18:24.997 "adrfam": "IPv4", 00:18:24.997 "traddr": "10.0.0.2", 00:18:24.997 "trsvcid": "4420" 00:18:24.997 }, 00:18:24.997 "peer_address": { 00:18:24.997 "trtype": "TCP", 00:18:24.997 "adrfam": "IPv4", 00:18:24.997 "traddr": "10.0.0.1", 00:18:24.997 "trsvcid": "37476" 00:18:24.997 }, 00:18:24.997 "auth": { 00:18:24.997 "state": "completed", 00:18:24.997 "digest": "sha384", 00:18:24.997 "dhgroup": "ffdhe2048" 00:18:24.997 } 00:18:24.997 } 00:18:24.997 ]' 00:18:24.997 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.997 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:24.997 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.257 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:25.257 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.257 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.257 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.257 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.257 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:18:25.258 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:18:26.198 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.198 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.198 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.198 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.198 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.198 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.198 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:26.198 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:26.198 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:18:26.198 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.198 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:26.198 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:26.198 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:26.198 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.199 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.199 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.199 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.199 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.199 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.199 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.199 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.459 00:18:26.459 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.459 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.459 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.720 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.720 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.720 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.720 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.720 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.720 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.720 { 00:18:26.720 "cntlid": 59, 00:18:26.720 "qid": 0, 00:18:26.720 "state": "enabled", 00:18:26.720 "thread": "nvmf_tgt_poll_group_000", 00:18:26.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:26.720 "listen_address": { 00:18:26.720 "trtype": "TCP", 00:18:26.720 "adrfam": "IPv4", 00:18:26.720 "traddr": "10.0.0.2", 00:18:26.720 "trsvcid": "4420" 00:18:26.720 }, 00:18:26.720 "peer_address": { 00:18:26.720 "trtype": "TCP", 00:18:26.720 "adrfam": "IPv4", 00:18:26.720 "traddr": "10.0.0.1", 00:18:26.720 "trsvcid": "37508" 00:18:26.720 }, 00:18:26.720 "auth": { 00:18:26.720 "state": "completed", 00:18:26.720 "digest": "sha384", 00:18:26.720 "dhgroup": "ffdhe2048" 00:18:26.720 } 00:18:26.720 } 00:18:26.720 ]' 00:18:26.720 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.720 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.720 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.720 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:26.720 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.720 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.720 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.720 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.981 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:18:26.981 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:18:27.923 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.923 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.923 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.923 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.923 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.923 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.923 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:27.923 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:27.923 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:18:27.923 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.923 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:27.923 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:27.923 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:27.923 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.923 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.923 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.923 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.923 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.923 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.923 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.923 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.183 00:18:28.183 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.183 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.183 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.445 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.445 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.445 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.445 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.445 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.445 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.445 { 00:18:28.445 "cntlid": 61, 00:18:28.445 "qid": 0, 00:18:28.445 "state": "enabled", 00:18:28.445 "thread": "nvmf_tgt_poll_group_000", 00:18:28.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:28.445 "listen_address": { 00:18:28.445 "trtype": "TCP", 00:18:28.445 "adrfam": "IPv4", 00:18:28.445 "traddr": "10.0.0.2", 00:18:28.445 "trsvcid": "4420" 00:18:28.445 }, 00:18:28.445 "peer_address": { 00:18:28.445 "trtype": "TCP", 00:18:28.445 "adrfam": "IPv4", 00:18:28.445 "traddr": "10.0.0.1", 00:18:28.445 "trsvcid": "51236" 00:18:28.445 }, 00:18:28.445 "auth": { 00:18:28.445 "state": "completed", 00:18:28.445 "digest": "sha384", 00:18:28.445 "dhgroup": "ffdhe2048" 00:18:28.445 } 00:18:28.445 } 00:18:28.445 ]' 00:18:28.445 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.445 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.445 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.445 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:28.445 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.445 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.445 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.445 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.736 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:18:28.736 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:18:29.341 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.341 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.341 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.341 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.341 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.341 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.341 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:29.341 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:29.602 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:18:29.602 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.602 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:29.602 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:29.602 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:29.602 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.602 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:29.602 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.602 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.602 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.602 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:29.602 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:29.602 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:29.863 00:18:29.863 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.863 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.863 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.863 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.863 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.863 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.863 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.863 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.863 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.863 { 00:18:29.863 "cntlid": 63, 00:18:29.863 "qid": 0, 00:18:29.863 "state": "enabled", 00:18:29.863 "thread": "nvmf_tgt_poll_group_000", 00:18:29.863 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:29.863 "listen_address": { 00:18:29.863 "trtype": "TCP", 00:18:29.863 "adrfam": "IPv4", 00:18:29.863 "traddr": "10.0.0.2", 00:18:29.863 "trsvcid": "4420" 00:18:29.863 }, 00:18:29.863 "peer_address": { 00:18:29.863 "trtype": "TCP", 00:18:29.863 "adrfam": "IPv4", 00:18:29.863 "traddr": "10.0.0.1", 00:18:29.863 "trsvcid": "51264" 00:18:29.863 }, 00:18:29.863 "auth": { 00:18:29.863 "state": "completed", 00:18:29.863 "digest": "sha384", 00:18:29.863 "dhgroup": "ffdhe2048" 00:18:29.863 } 00:18:29.863 } 00:18:29.863 ]' 00:18:29.863 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.125 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.125 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.125 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:30.125 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.125 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.126 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.126 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.388 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:18:30.388 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:18:30.961 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:18:30.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.961 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:30.961 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.961 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.961 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.961 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:30.961 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.961 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:30.961 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:31.222 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:18:31.222 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.222 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:31.222 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:31.222 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:31.222 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.222 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.222 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.222 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.222 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.222 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.222 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.222 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.483 
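The auth.sh@119/@120 markers visible just above are the loop headers driving this whole stretch: with the digest fixed at sha384, the test sweeps every DH group (null, then ffdhe2048, now ffdhe3072) against every configured key index, re-running the same connect/verify cycle each time. A sketch of the control flow reconstructed from the trace markers (not the verbatim test script):

    for dhgroup in "${dhgroups[@]}"; do                        # auth.sh@119
        for keyid in "${!keys[@]}"; do                         # auth.sh@120
            hostrpc bdev_nvme_set_options --dhchap-digests sha384 \
                --dhchap-dhgroups "$dhgroup"                   # auth.sh@121
            connect_authenticate sha384 "$dhgroup" "$keyid"    # auth.sh@123
        done
    done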
00:18:31.483 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.483 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.483 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.483 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.483 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.483 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.483 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.483 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.483 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.483 { 00:18:31.483 "cntlid": 65, 00:18:31.483 "qid": 0, 00:18:31.483 "state": "enabled", 00:18:31.483 "thread": "nvmf_tgt_poll_group_000", 00:18:31.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:31.483 "listen_address": { 00:18:31.483 "trtype": "TCP", 00:18:31.483 "adrfam": "IPv4", 00:18:31.483 "traddr": "10.0.0.2", 00:18:31.483 "trsvcid": "4420" 00:18:31.483 }, 00:18:31.483 "peer_address": { 00:18:31.483 "trtype": "TCP", 00:18:31.483 "adrfam": "IPv4", 00:18:31.483 "traddr": "10.0.0.1", 00:18:31.483 "trsvcid": "51298" 00:18:31.483 }, 00:18:31.483 "auth": { 00:18:31.483 "state": "completed", 00:18:31.483 "digest": "sha384", 00:18:31.483 "dhgroup": "ffdhe3072" 00:18:31.483 } 00:18:31.483 } 00:18:31.483 ]' 00:18:31.483 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.744 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:31.744 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.744 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:31.744 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.744 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.744 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.744 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.006 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:18:32.006 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:18:32.577 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.577 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.577 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.577 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.577 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.577 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.577 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:32.577 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:32.838 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:32.838 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.838 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:32.838 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:32.838 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:32.838 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.838 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.838 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.838 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.838 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.838 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.838 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.838 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.838 00:18:33.099 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.099 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.099 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.099 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.099 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.099 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.099 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.099 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.099 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.099 { 00:18:33.099 "cntlid": 67, 00:18:33.099 "qid": 0, 00:18:33.099 "state": "enabled", 00:18:33.099 "thread": "nvmf_tgt_poll_group_000", 00:18:33.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:33.099 "listen_address": { 00:18:33.099 "trtype": "TCP", 00:18:33.099 "adrfam": "IPv4", 00:18:33.099 "traddr": "10.0.0.2", 00:18:33.099 "trsvcid": "4420" 00:18:33.099 }, 00:18:33.099 "peer_address": { 00:18:33.099 "trtype": "TCP", 00:18:33.099 "adrfam": "IPv4", 00:18:33.099 "traddr": "10.0.0.1", 00:18:33.099 "trsvcid": "51318" 00:18:33.099 }, 00:18:33.099 "auth": { 00:18:33.099 "state": "completed", 00:18:33.099 "digest": "sha384", 00:18:33.099 "dhgroup": "ffdhe3072" 00:18:33.099 } 00:18:33.099 } 00:18:33.099 ]' 00:18:33.099 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.359 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.359 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.359 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:33.359 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.359 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.359 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.359 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.621 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret 
DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:18:33.621 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:18:34.192 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.192 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.192 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.192 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.192 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.192 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.192 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:34.192 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:34.452 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:18:34.452 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.452 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:34.452 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:34.452 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:34.452 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.452 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.452 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.452 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.452 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.452 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.452 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.452 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.713 00:18:34.713 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.713 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.713 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.713 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.713 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.713 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.713 08:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.973 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.973 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.973 { 00:18:34.973 "cntlid": 69, 00:18:34.973 "qid": 0, 00:18:34.973 "state": "enabled", 00:18:34.973 "thread": "nvmf_tgt_poll_group_000", 00:18:34.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:34.973 "listen_address": { 00:18:34.973 "trtype": "TCP", 00:18:34.973 "adrfam": "IPv4", 00:18:34.973 "traddr": "10.0.0.2", 00:18:34.973 "trsvcid": "4420" 00:18:34.973 }, 00:18:34.973 "peer_address": { 00:18:34.973 "trtype": "TCP", 00:18:34.973 "adrfam": "IPv4", 00:18:34.973 "traddr": "10.0.0.1", 00:18:34.973 "trsvcid": "51348" 00:18:34.973 }, 00:18:34.973 "auth": { 00:18:34.973 "state": "completed", 00:18:34.973 "digest": "sha384", 00:18:34.973 "dhgroup": "ffdhe3072" 00:18:34.973 } 00:18:34.973 } 00:18:34.973 ]' 00:18:34.973 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.973 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.973 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.973 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:34.973 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.973 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.973 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.973 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:35.234 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:18:35.234 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:18:35.804 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.804 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.804 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.804 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.804 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.804 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.804 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:35.804 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:36.064 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:36.064 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.064 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:36.064 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:36.064 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:36.064 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.064 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:36.064 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.064 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.064 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.064 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
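The stretch above is one iteration of the test's connect_authenticate cycle: the host-side bdev_nvme stack is pinned to a single digest and DH group, the target is told which DH-CHAP key (and, where one exists, controller key) to accept for the host NQN, and a controller is attached so the DH-HMAC-CHAP handshake actually runs. A condensed sketch of that setup half, not the verbatim target/auth.sh, assuming the target answers on SPDK's default RPC socket (the rpc_cmd lines in the trace carry no -s override) and using key2, one of the indices that carries a controller key; key3, by contrast, has none, which is why the ${ckeys[$3]:+...} expansion in the trace emits no --dhchap-ctrlr-key for it:

    # One setup half of a connect_authenticate cycle (sketch; NQNs and
    # addresses are the ones used throughout this run).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Host stack (second SPDK app on /var/tmp/host.sock): restrict negotiation
    # to the digest/DH group under test.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

    # Target: authorize the host NQN with this iteration's key pair.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host stack: attach a controller, forcing the DH-HMAC-CHAP handshake.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2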
00:18:36.064 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.064 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.324 00:18:36.324 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.324 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.324 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.324 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.324 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.324 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.324 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.324 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.324 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.324 { 00:18:36.324 "cntlid": 71, 00:18:36.324 "qid": 0, 00:18:36.324 "state": "enabled", 00:18:36.324 "thread": "nvmf_tgt_poll_group_000", 00:18:36.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:36.324 "listen_address": { 00:18:36.324 "trtype": "TCP", 00:18:36.324 "adrfam": "IPv4", 00:18:36.324 "traddr": "10.0.0.2", 00:18:36.324 "trsvcid": "4420" 00:18:36.324 }, 00:18:36.325 "peer_address": { 00:18:36.325 "trtype": "TCP", 00:18:36.325 "adrfam": "IPv4", 00:18:36.325 "traddr": "10.0.0.1", 00:18:36.325 "trsvcid": "51372" 00:18:36.325 }, 00:18:36.325 "auth": { 00:18:36.325 "state": "completed", 00:18:36.325 "digest": "sha384", 00:18:36.325 "dhgroup": "ffdhe3072" 00:18:36.325 } 00:18:36.325 } 00:18:36.325 ]' 00:18:36.325 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.585 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.585 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.585 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:36.585 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.585 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.585 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.585 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.845 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:18:36.845 08:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:18:37.416 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.417 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.417 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.417 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.417 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.417 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:37.417 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.417 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:37.417 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:37.678 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:37.678 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.678 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:37.678 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:37.678 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:37.678 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.678 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.678 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.678 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.678 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
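Each attach is then verified from both sides: the host stack must report the controller under the expected name, and the target's qpair listing must show that authentication completed with the negotiated parameters. The checks the trace performs next, condensed into a sketch that reuses the same jq filters and assumes the sockets noted earlier:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Host stack: the attached controller should be reported as nvme0.
    [[ $($RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Target: the qpair must show the negotiated digest/DH group and a
    # completed DH-HMAC-CHAP exchange.
    qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]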
00:18:37.678 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.678 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.678 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.940 00:18:37.940 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:37.940 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:37.940 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.940 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.940 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.940 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.940 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.940 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.940 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:37.940 { 00:18:37.940 "cntlid": 73, 00:18:37.940 "qid": 0, 00:18:37.940 "state": "enabled", 00:18:37.940 "thread": "nvmf_tgt_poll_group_000", 00:18:37.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:37.940 "listen_address": { 00:18:37.940 "trtype": "TCP", 00:18:37.940 "adrfam": "IPv4", 00:18:37.940 "traddr": "10.0.0.2", 00:18:37.940 "trsvcid": "4420" 00:18:37.940 }, 00:18:37.940 "peer_address": { 00:18:37.941 "trtype": "TCP", 00:18:37.941 "adrfam": "IPv4", 00:18:37.941 "traddr": "10.0.0.1", 00:18:37.941 "trsvcid": "54146" 00:18:37.941 }, 00:18:37.941 "auth": { 00:18:37.941 "state": "completed", 00:18:37.941 "digest": "sha384", 00:18:37.941 "dhgroup": "ffdhe4096" 00:18:37.941 } 00:18:37.941 } 00:18:37.941 ]' 00:18:37.941 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.201 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.201 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.201 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:38.201 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.201 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.201 
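Besides the SPDK host stack, every key is exercised once more through the kernel initiator via nvme-cli, which takes the secrets in the DHHC-1 wire representation ("DHHC-1:<t>:<base64 secret>:", where the two-digit <t> field names the transform applied to the secret: 00 for none, 01/02/03 for SHA-256/384/512). A sketch of that leg, with <host-secret> and <ctrl-secret> as placeholders for the key material generated earlier in the run:

    # Kernel-initiator pass over the same key pair (placeholders, not real keys).
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
        --dhchap-secret 'DHHC-1:00:<host-secret>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<ctrl-secret>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0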
08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.201 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.460 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:18:38.460 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:18:39.029 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.029 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.029 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.029 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.029 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.029 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.029 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:39.029 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:39.288 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:39.288 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.288 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:39.288 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:39.288 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:39.288 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.288 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.288 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.288 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.288 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.288 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.288 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.288 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.549 00:18:39.549 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.549 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.549 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.810 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.810 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.810 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.810 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.810 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.810 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.810 { 00:18:39.810 "cntlid": 75, 00:18:39.810 "qid": 0, 00:18:39.810 "state": "enabled", 00:18:39.810 "thread": "nvmf_tgt_poll_group_000", 00:18:39.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:39.810 "listen_address": { 00:18:39.810 "trtype": "TCP", 00:18:39.810 "adrfam": "IPv4", 00:18:39.810 "traddr": "10.0.0.2", 00:18:39.810 "trsvcid": "4420" 00:18:39.810 }, 00:18:39.810 "peer_address": { 00:18:39.810 "trtype": "TCP", 00:18:39.810 "adrfam": "IPv4", 00:18:39.810 "traddr": "10.0.0.1", 00:18:39.810 "trsvcid": "54168" 00:18:39.810 }, 00:18:39.810 "auth": { 00:18:39.810 "state": "completed", 00:18:39.810 "digest": "sha384", 00:18:39.810 "dhgroup": "ffdhe4096" 00:18:39.810 } 00:18:39.810 } 00:18:39.810 ]' 00:18:39.810 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.810 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.810 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:39.810 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:18:39.810 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:39.810 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.810 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.810 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.070 08:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:18:40.071 08:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:18:40.642 08:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.642 08:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:40.642 08:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.642 08:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.642 08:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.642 08:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.642 08:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:40.642 08:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:40.903 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:40.903 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:40.903 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:40.903 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:40.903 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:40.903 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.903 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.903 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.903 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.903 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.903 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.903 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.903 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.163 00:18:41.163 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.163 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.163 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.424 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.424 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.424 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.424 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.424 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.424 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.424 { 00:18:41.424 "cntlid": 77, 00:18:41.424 "qid": 0, 00:18:41.424 "state": "enabled", 00:18:41.424 "thread": "nvmf_tgt_poll_group_000", 00:18:41.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:41.424 "listen_address": { 00:18:41.424 "trtype": "TCP", 00:18:41.424 "adrfam": "IPv4", 00:18:41.424 "traddr": "10.0.0.2", 00:18:41.424 "trsvcid": "4420" 00:18:41.424 }, 00:18:41.425 "peer_address": { 00:18:41.425 "trtype": "TCP", 00:18:41.425 "adrfam": "IPv4", 00:18:41.425 "traddr": "10.0.0.1", 00:18:41.425 "trsvcid": "54206" 00:18:41.425 }, 00:18:41.425 "auth": { 00:18:41.425 "state": "completed", 00:18:41.425 "digest": "sha384", 00:18:41.425 "dhgroup": "ffdhe4096" 00:18:41.425 } 00:18:41.425 } 00:18:41.425 ]' 00:18:41.425 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.425 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.425 08:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.425 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:41.425 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.425 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.425 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.425 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.685 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:18:41.685 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:18:42.258 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.258 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:42.258 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.258 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.258 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.258 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.258 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:42.258 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:42.519 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:18:42.519 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.519 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:42.519 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:42.519 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:42.519 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.519 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:42.519 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.519 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.519 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.519 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:42.519 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:42.519 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:42.779 00:18:42.779 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.779 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.779 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.040 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.040 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.040 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.040 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.040 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.040 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.040 { 00:18:43.040 "cntlid": 79, 00:18:43.040 "qid": 0, 00:18:43.040 "state": "enabled", 00:18:43.040 "thread": "nvmf_tgt_poll_group_000", 00:18:43.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:43.040 "listen_address": { 00:18:43.040 "trtype": "TCP", 00:18:43.040 "adrfam": "IPv4", 00:18:43.040 "traddr": "10.0.0.2", 00:18:43.040 "trsvcid": "4420" 00:18:43.040 }, 00:18:43.040 "peer_address": { 00:18:43.040 "trtype": "TCP", 00:18:43.040 "adrfam": "IPv4", 00:18:43.040 "traddr": "10.0.0.1", 00:18:43.040 "trsvcid": "54230" 00:18:43.040 }, 00:18:43.040 "auth": { 00:18:43.040 "state": "completed", 00:18:43.040 "digest": "sha384", 00:18:43.040 "dhgroup": "ffdhe4096" 00:18:43.040 } 00:18:43.040 } 00:18:43.040 ]' 00:18:43.040 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.040 08:17:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:43.040 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.040 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:43.040 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.040 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.040 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.040 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.301 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:18:43.301 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:18:43.871 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.871 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.871 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.871 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.871 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.871 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.871 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.871 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:43.871 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:44.131 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:44.131 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.131 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:44.131 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:44.131 08:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:44.131 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.131 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.132 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.132 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.132 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.132 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.132 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.132 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.392 00:18:44.392 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.392 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.392 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.653 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.653 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.653 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.653 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.653 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.653 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.653 { 00:18:44.653 "cntlid": 81, 00:18:44.653 "qid": 0, 00:18:44.653 "state": "enabled", 00:18:44.653 "thread": "nvmf_tgt_poll_group_000", 00:18:44.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:44.653 "listen_address": { 00:18:44.653 "trtype": "TCP", 00:18:44.653 "adrfam": "IPv4", 00:18:44.653 "traddr": "10.0.0.2", 00:18:44.653 "trsvcid": "4420" 00:18:44.653 }, 00:18:44.653 "peer_address": { 00:18:44.653 "trtype": "TCP", 00:18:44.653 "adrfam": "IPv4", 00:18:44.653 "traddr": "10.0.0.1", 00:18:44.653 "trsvcid": "54252" 00:18:44.653 }, 00:18:44.653 "auth": { 00:18:44.653 "state": "completed", 00:18:44.653 "digest": 
"sha384", 00:18:44.653 "dhgroup": "ffdhe6144" 00:18:44.653 } 00:18:44.653 } 00:18:44.653 ]' 00:18:44.653 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.653 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.653 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.653 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:44.653 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.913 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.913 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.913 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.913 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:18:44.913 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:18:45.484 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.744 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.744 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.744 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.744 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.744 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.744 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:45.744 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:45.744 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:45.744 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.744 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:45.744 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:45.744 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:45.744 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.744 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.744 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.744 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.744 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.744 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.744 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.744 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.005 00:18:46.266 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:46.266 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.266 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.266 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.266 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.266 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.266 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.266 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.266 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.266 { 00:18:46.266 "cntlid": 83, 00:18:46.266 "qid": 0, 00:18:46.266 "state": "enabled", 00:18:46.266 "thread": "nvmf_tgt_poll_group_000", 00:18:46.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:46.266 "listen_address": { 00:18:46.266 "trtype": "TCP", 00:18:46.266 "adrfam": "IPv4", 00:18:46.266 "traddr": "10.0.0.2", 00:18:46.266 
"trsvcid": "4420" 00:18:46.266 }, 00:18:46.266 "peer_address": { 00:18:46.266 "trtype": "TCP", 00:18:46.266 "adrfam": "IPv4", 00:18:46.266 "traddr": "10.0.0.1", 00:18:46.266 "trsvcid": "54280" 00:18:46.266 }, 00:18:46.266 "auth": { 00:18:46.266 "state": "completed", 00:18:46.266 "digest": "sha384", 00:18:46.266 "dhgroup": "ffdhe6144" 00:18:46.266 } 00:18:46.266 } 00:18:46.266 ]' 00:18:46.266 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.266 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.266 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.527 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:46.527 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.527 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.527 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.527 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.788 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:18:46.788 08:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:18:47.359 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.359 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.359 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.359 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.359 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.359 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.359 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:47.359 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:47.620 
08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:47.620 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.620 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:47.620 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:47.620 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:47.620 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.620 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.620 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.620 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.620 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.620 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.620 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.620 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.880 00:18:47.880 08:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.880 08:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.880 08:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.139 08:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.139 08:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.140 08:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.140 08:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.140 08:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.140 08:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.140 { 00:18:48.140 "cntlid": 85, 00:18:48.140 "qid": 0, 00:18:48.140 "state": "enabled", 00:18:48.140 "thread": "nvmf_tgt_poll_group_000", 00:18:48.140 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:48.140 "listen_address": { 00:18:48.140 "trtype": "TCP", 00:18:48.140 "adrfam": "IPv4", 00:18:48.140 "traddr": "10.0.0.2", 00:18:48.140 "trsvcid": "4420" 00:18:48.140 }, 00:18:48.140 "peer_address": { 00:18:48.140 "trtype": "TCP", 00:18:48.140 "adrfam": "IPv4", 00:18:48.140 "traddr": "10.0.0.1", 00:18:48.140 "trsvcid": "44330" 00:18:48.140 }, 00:18:48.140 "auth": { 00:18:48.140 "state": "completed", 00:18:48.140 "digest": "sha384", 00:18:48.140 "dhgroup": "ffdhe6144" 00:18:48.140 } 00:18:48.140 } 00:18:48.140 ]' 00:18:48.140 08:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.140 08:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.140 08:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.140 08:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:48.140 08:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.140 08:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.140 08:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.140 08:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.399 08:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:18:48.399 08:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:18:48.970 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.970 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:48.970 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.970 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.970 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.970 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.970 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:48.971 08:17:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:49.232 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:18:49.232 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.232 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:49.232 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:49.232 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:49.232 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.232 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:49.232 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.232 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.232 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.232 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:49.232 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:49.232 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:49.492 00:18:49.492 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:49.492 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:49.492 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.752 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.752 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.752 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.752 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.752 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.752 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.752 { 00:18:49.752 "cntlid": 87, 
00:18:49.752 "qid": 0, 00:18:49.752 "state": "enabled", 00:18:49.752 "thread": "nvmf_tgt_poll_group_000", 00:18:49.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:49.752 "listen_address": { 00:18:49.753 "trtype": "TCP", 00:18:49.753 "adrfam": "IPv4", 00:18:49.753 "traddr": "10.0.0.2", 00:18:49.753 "trsvcid": "4420" 00:18:49.753 }, 00:18:49.753 "peer_address": { 00:18:49.753 "trtype": "TCP", 00:18:49.753 "adrfam": "IPv4", 00:18:49.753 "traddr": "10.0.0.1", 00:18:49.753 "trsvcid": "44344" 00:18:49.753 }, 00:18:49.753 "auth": { 00:18:49.753 "state": "completed", 00:18:49.753 "digest": "sha384", 00:18:49.753 "dhgroup": "ffdhe6144" 00:18:49.753 } 00:18:49.753 } 00:18:49.753 ]' 00:18:49.753 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.753 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.753 08:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.013 08:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:50.013 08:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.013 08:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.013 08:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.013 08:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.013 08:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:18:50.013 08:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:18:50.955 08:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.955 08:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:50.955 08:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.955 08:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.955 08:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.955 08:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:50.955 08:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.955 08:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:50.955 08:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:50.955 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:18:50.955 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.955 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:50.955 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:50.955 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:50.955 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.955 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.955 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.955 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.955 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.955 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.955 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.955 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.526 00:18:51.526 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.526 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.526 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.527 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.527 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.527 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.527 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.527 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.527 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.527 { 00:18:51.527 "cntlid": 89, 00:18:51.527 "qid": 0, 00:18:51.527 "state": "enabled", 00:18:51.527 "thread": "nvmf_tgt_poll_group_000", 00:18:51.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:51.527 "listen_address": { 00:18:51.527 "trtype": "TCP", 00:18:51.527 "adrfam": "IPv4", 00:18:51.527 "traddr": "10.0.0.2", 00:18:51.527 "trsvcid": "4420" 00:18:51.527 }, 00:18:51.527 "peer_address": { 00:18:51.527 "trtype": "TCP", 00:18:51.527 "adrfam": "IPv4", 00:18:51.527 "traddr": "10.0.0.1", 00:18:51.527 "trsvcid": "44368" 00:18:51.527 }, 00:18:51.527 "auth": { 00:18:51.527 "state": "completed", 00:18:51.527 "digest": "sha384", 00:18:51.527 "dhgroup": "ffdhe8192" 00:18:51.527 } 00:18:51.527 } 00:18:51.527 ]' 00:18:51.527 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.527 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.527 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.787 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:51.787 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.787 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.787 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.787 08:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.077 08:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:18:52.077 08:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:18:52.649 08:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.649 08:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:52.649 08:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.649 08:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.649 08:17:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.649 08:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.649 08:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:52.649 08:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:52.649 08:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:52.649 08:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.649 08:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:52.649 08:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:52.649 08:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:52.910 08:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.910 08:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.910 08:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.910 08:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.910 08:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.910 08:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.910 08:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.910 08:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.171 00:18:53.171 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.171 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.171 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.432 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.432 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:53.432 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.432 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.432 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.432 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.432 { 00:18:53.432 "cntlid": 91, 00:18:53.432 "qid": 0, 00:18:53.432 "state": "enabled", 00:18:53.432 "thread": "nvmf_tgt_poll_group_000", 00:18:53.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:53.432 "listen_address": { 00:18:53.432 "trtype": "TCP", 00:18:53.432 "adrfam": "IPv4", 00:18:53.432 "traddr": "10.0.0.2", 00:18:53.432 "trsvcid": "4420" 00:18:53.432 }, 00:18:53.432 "peer_address": { 00:18:53.432 "trtype": "TCP", 00:18:53.432 "adrfam": "IPv4", 00:18:53.432 "traddr": "10.0.0.1", 00:18:53.432 "trsvcid": "44396" 00:18:53.432 }, 00:18:53.432 "auth": { 00:18:53.432 "state": "completed", 00:18:53.432 "digest": "sha384", 00:18:53.432 "dhgroup": "ffdhe8192" 00:18:53.432 } 00:18:53.432 } 00:18:53.432 ]' 00:18:53.432 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.432 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.432 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.693 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:53.693 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.693 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.693 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.693 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.693 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:18:53.693 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:18:54.636 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.636 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.636 08:17:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.636 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.636 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.636 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.636 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:54.636 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:54.636 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:54.636 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.636 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:54.636 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:54.637 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:54.637 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.637 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.637 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.637 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.637 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.637 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.637 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.637 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.207 00:18:55.207 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.207 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.207 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.207 08:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.207 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.207 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.207 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.207 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.207 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.207 { 00:18:55.207 "cntlid": 93, 00:18:55.207 "qid": 0, 00:18:55.207 "state": "enabled", 00:18:55.207 "thread": "nvmf_tgt_poll_group_000", 00:18:55.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:55.207 "listen_address": { 00:18:55.207 "trtype": "TCP", 00:18:55.207 "adrfam": "IPv4", 00:18:55.207 "traddr": "10.0.0.2", 00:18:55.207 "trsvcid": "4420" 00:18:55.207 }, 00:18:55.207 "peer_address": { 00:18:55.207 "trtype": "TCP", 00:18:55.207 "adrfam": "IPv4", 00:18:55.207 "traddr": "10.0.0.1", 00:18:55.207 "trsvcid": "44426" 00:18:55.207 }, 00:18:55.207 "auth": { 00:18:55.207 "state": "completed", 00:18:55.207 "digest": "sha384", 00:18:55.207 "dhgroup": "ffdhe8192" 00:18:55.207 } 00:18:55.207 } 00:18:55.207 ]' 00:18:55.207 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.467 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.467 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.467 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:55.467 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.467 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.467 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.467 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.727 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:18:55.727 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:18:56.299 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.299 08:17:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:56.299 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.299 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.299 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.299 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.299 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:56.299 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:56.561 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:56.561 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:56.562 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:56.562 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:56.562 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:56.562 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.562 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:56.562 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.562 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.562 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.562 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:56.562 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:56.562 08:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:57.133 00:18:57.133 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.133 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.133 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.133 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.133 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.133 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.133 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.133 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.133 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.133 { 00:18:57.133 "cntlid": 95, 00:18:57.133 "qid": 0, 00:18:57.133 "state": "enabled", 00:18:57.133 "thread": "nvmf_tgt_poll_group_000", 00:18:57.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:57.133 "listen_address": { 00:18:57.133 "trtype": "TCP", 00:18:57.133 "adrfam": "IPv4", 00:18:57.133 "traddr": "10.0.0.2", 00:18:57.133 "trsvcid": "4420" 00:18:57.133 }, 00:18:57.133 "peer_address": { 00:18:57.133 "trtype": "TCP", 00:18:57.133 "adrfam": "IPv4", 00:18:57.133 "traddr": "10.0.0.1", 00:18:57.133 "trsvcid": "44458" 00:18:57.133 }, 00:18:57.133 "auth": { 00:18:57.133 "state": "completed", 00:18:57.133 "digest": "sha384", 00:18:57.133 "dhgroup": "ffdhe8192" 00:18:57.133 } 00:18:57.133 } 00:18:57.133 ]' 00:18:57.133 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.133 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:57.133 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.395 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:57.395 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.395 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.395 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.395 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.395 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:18:57.395 08:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:18:58.342 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.342 08:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:58.342 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.342 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.342 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.342 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:58.342 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:58.342 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.342 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:58.342 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:58.342 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:58.342 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.342 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:58.342 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:58.342 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:58.342 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.342 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.342 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.342 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.342 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.343 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.343 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.343 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.605 00:18:58.605 
08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.605 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.605 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.867 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.867 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.867 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.867 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.867 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.867 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.867 { 00:18:58.867 "cntlid": 97, 00:18:58.867 "qid": 0, 00:18:58.867 "state": "enabled", 00:18:58.867 "thread": "nvmf_tgt_poll_group_000", 00:18:58.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:58.867 "listen_address": { 00:18:58.867 "trtype": "TCP", 00:18:58.867 "adrfam": "IPv4", 00:18:58.867 "traddr": "10.0.0.2", 00:18:58.867 "trsvcid": "4420" 00:18:58.867 }, 00:18:58.867 "peer_address": { 00:18:58.867 "trtype": "TCP", 00:18:58.867 "adrfam": "IPv4", 00:18:58.867 "traddr": "10.0.0.1", 00:18:58.867 "trsvcid": "33686" 00:18:58.867 }, 00:18:58.867 "auth": { 00:18:58.867 "state": "completed", 00:18:58.867 "digest": "sha512", 00:18:58.867 "dhgroup": "null" 00:18:58.867 } 00:18:58.867 } 00:18:58.867 ]' 00:18:58.867 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.867 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.867 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.867 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:58.867 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.867 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.867 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.868 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.128 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:18:59.128 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:18:59.701 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.701 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.701 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.701 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.701 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.701 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.701 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:59.701 08:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:59.962 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:59.962 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.962 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:59.962 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:59.962 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:59.962 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.962 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.962 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.962 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.962 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.962 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.963 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.963 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.223 00:19:00.223 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.223 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.223 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.223 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.223 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.223 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.223 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.485 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.485 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.485 { 00:19:00.485 "cntlid": 99, 00:19:00.485 "qid": 0, 00:19:00.485 "state": "enabled", 00:19:00.485 "thread": "nvmf_tgt_poll_group_000", 00:19:00.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:00.485 "listen_address": { 00:19:00.485 "trtype": "TCP", 00:19:00.485 "adrfam": "IPv4", 00:19:00.485 "traddr": "10.0.0.2", 00:19:00.485 "trsvcid": "4420" 00:19:00.485 }, 00:19:00.485 "peer_address": { 00:19:00.485 "trtype": "TCP", 00:19:00.485 "adrfam": "IPv4", 00:19:00.485 "traddr": "10.0.0.1", 00:19:00.485 "trsvcid": "33712" 00:19:00.485 }, 00:19:00.485 "auth": { 00:19:00.485 "state": "completed", 00:19:00.485 "digest": "sha512", 00:19:00.485 "dhgroup": "null" 00:19:00.485 } 00:19:00.485 } 00:19:00.485 ]' 00:19:00.485 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.485 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.485 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.486 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:00.486 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.486 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.486 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.486 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.747 08:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:19:00.747 08:17:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:19:01.317 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.317 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:01.317 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.317 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.317 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.317 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.317 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:01.317 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:01.578 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:19:01.578 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.578 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:01.578 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:01.578 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:01.578 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.578 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.578 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.578 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.578 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.578 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.578 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
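As an aside before the trace continues: every connect_authenticate pass in this log reduces to the same target-side and host-side RPC sequence. Below is a minimal sketch of one sha512/null pass, using only commands and flags that appear verbatim in the trace; rpc_cmd and hostrpc are the test's own wrappers (hostrpc drives rpc.py against /var/tmp/host.sock), and collapsing the pass into this shape is an assumption for illustration, not part of the captured output.

    # Target side: authorize the host NQN with the key pair under test.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Host side: pin the initiator to a single digest/dhgroup, then attach.
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Verify what the qpair actually negotiated, then tear down.
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
    hostrpc bdev_nvme_detach_controller nvme0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

The per-pass jq checks against the qpair dump (digest, dhgroup, auth state "completed") are what the [[ ... == ... ]] assertions in the trace are comparing.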
00:19:01.578 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.839 00:19:01.839 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.839 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.839 08:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.100 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.100 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.100 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.100 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.100 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.100 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.100 { 00:19:02.100 "cntlid": 101, 00:19:02.100 "qid": 0, 00:19:02.100 "state": "enabled", 00:19:02.100 "thread": "nvmf_tgt_poll_group_000", 00:19:02.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:02.100 "listen_address": { 00:19:02.100 "trtype": "TCP", 00:19:02.100 "adrfam": "IPv4", 00:19:02.100 "traddr": "10.0.0.2", 00:19:02.100 "trsvcid": "4420" 00:19:02.100 }, 00:19:02.100 "peer_address": { 00:19:02.100 "trtype": "TCP", 00:19:02.100 "adrfam": "IPv4", 00:19:02.100 "traddr": "10.0.0.1", 00:19:02.100 "trsvcid": "33730" 00:19:02.100 }, 00:19:02.100 "auth": { 00:19:02.100 "state": "completed", 00:19:02.100 "digest": "sha512", 00:19:02.100 "dhgroup": "null" 00:19:02.100 } 00:19:02.100 } 00:19:02.100 ]' 00:19:02.100 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.100 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.100 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.100 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:02.100 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.100 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.100 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.100 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.360 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:19:02.360 08:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:19:02.932 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.932 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.932 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.932 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.932 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.932 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.932 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:02.932 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:03.193 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:19:03.193 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.193 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:03.193 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:03.193 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:03.194 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.194 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:03.194 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.194 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.194 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.194 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:03.194 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:03.194 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:03.454 00:19:03.454 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.454 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.454 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.714 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.714 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.714 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.714 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.714 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.714 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.714 { 00:19:03.714 "cntlid": 103, 00:19:03.714 "qid": 0, 00:19:03.714 "state": "enabled", 00:19:03.714 "thread": "nvmf_tgt_poll_group_000", 00:19:03.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:03.714 "listen_address": { 00:19:03.714 "trtype": "TCP", 00:19:03.714 "adrfam": "IPv4", 00:19:03.714 "traddr": "10.0.0.2", 00:19:03.714 "trsvcid": "4420" 00:19:03.714 }, 00:19:03.714 "peer_address": { 00:19:03.714 "trtype": "TCP", 00:19:03.714 "adrfam": "IPv4", 00:19:03.714 "traddr": "10.0.0.1", 00:19:03.714 "trsvcid": "33742" 00:19:03.714 }, 00:19:03.714 "auth": { 00:19:03.714 "state": "completed", 00:19:03.714 "digest": "sha512", 00:19:03.714 "dhgroup": "null" 00:19:03.714 } 00:19:03.714 } 00:19:03.714 ]' 00:19:03.714 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.714 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.714 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.714 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:03.714 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.714 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.714 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.714 08:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.975 08:18:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:19:03.975 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:19:04.544 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.544 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:04.544 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.544 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.544 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.544 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:04.544 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:04.544 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:04.544 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:04.803 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:19:04.803 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.803 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:04.803 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:04.803 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:04.803 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.803 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.803 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.803 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.803 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.804 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
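For reference: each pass above exercises one (digest, dhgroup, keyid) combination end to end. A minimal sketch of the host-side RPC sequence, with the socket path, target address, and NQNs taken verbatim from this log (rpc.py is shown via its repo-relative path where the log uses the absolute workspace path; key0/ckey0 are named keyring entries registered earlier in the test, outside this excerpt):

# restrict the host to the digest/dhgroup pair under test
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
# attach with key0; --dhchap-ctrlr-key additionally authenticates the controller
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# confirm the controller came up, then detach before the next combination
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The kernel-initiator variant of the same check (nvme connect ... --dhchap-secret/--dhchap-ctrl-secret, then nvme disconnect) alternates with this RPC path throughout the run.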
00:19:04.804 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.804 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.063 00:19:05.063 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:05.063 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:05.063 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.323 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.323 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.323 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.323 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.323 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.323 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:05.323 { 00:19:05.323 "cntlid": 105, 00:19:05.323 "qid": 0, 00:19:05.323 "state": "enabled", 00:19:05.323 "thread": "nvmf_tgt_poll_group_000", 00:19:05.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:05.323 "listen_address": { 00:19:05.323 "trtype": "TCP", 00:19:05.323 "adrfam": "IPv4", 00:19:05.323 "traddr": "10.0.0.2", 00:19:05.323 "trsvcid": "4420" 00:19:05.323 }, 00:19:05.323 "peer_address": { 00:19:05.323 "trtype": "TCP", 00:19:05.323 "adrfam": "IPv4", 00:19:05.323 "traddr": "10.0.0.1", 00:19:05.323 "trsvcid": "33770" 00:19:05.323 }, 00:19:05.323 "auth": { 00:19:05.323 "state": "completed", 00:19:05.323 "digest": "sha512", 00:19:05.323 "dhgroup": "ffdhe2048" 00:19:05.323 } 00:19:05.323 } 00:19:05.323 ]' 00:19:05.323 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:05.323 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.323 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:05.323 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:05.323 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:05.323 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.323 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.323 08:18:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.583 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:19:05.583 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:19:06.154 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.154 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:06.154 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.154 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.154 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.154 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:06.154 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:06.154 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:06.414 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:19:06.414 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.414 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:06.414 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:06.414 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:06.414 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.414 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.414 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.414 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:06.414 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.414 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.414 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.414 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.754 00:19:06.755 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.755 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.755 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.755 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.755 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.755 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.755 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.755 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.755 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.755 { 00:19:06.755 "cntlid": 107, 00:19:06.755 "qid": 0, 00:19:06.755 "state": "enabled", 00:19:06.755 "thread": "nvmf_tgt_poll_group_000", 00:19:06.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:06.755 "listen_address": { 00:19:06.755 "trtype": "TCP", 00:19:06.755 "adrfam": "IPv4", 00:19:06.755 "traddr": "10.0.0.2", 00:19:06.755 "trsvcid": "4420" 00:19:06.755 }, 00:19:06.755 "peer_address": { 00:19:06.755 "trtype": "TCP", 00:19:06.755 "adrfam": "IPv4", 00:19:06.755 "traddr": "10.0.0.1", 00:19:06.755 "trsvcid": "33806" 00:19:06.755 }, 00:19:06.755 "auth": { 00:19:06.755 "state": "completed", 00:19:06.755 "digest": "sha512", 00:19:06.755 "dhgroup": "ffdhe2048" 00:19:06.755 } 00:19:06.755 } 00:19:06.755 ]' 00:19:06.755 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:07.041 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.041 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:07.041 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:07.041 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:19:07.041 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.041 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.041 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.041 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:19:07.302 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:19:07.874 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.874 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:07.874 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.874 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.874 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.874 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.874 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:07.874 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:07.874 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:19:07.874 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.874 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:07.874 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:07.874 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:07.874 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.874 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
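The target-side half of each pass is symmetric: the host NQN is (re)registered on the subsystem with the key pair under test before the host attaches. A condensed sketch of those calls as they appear above (rpc_cmd is presumably the test suite's wrapper around scripts/rpc.py against the target's RPC socket; its definition lies outside this excerpt):

# register the host with the keys; --dhchap-ctrlr-key enables bidirectional auth
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
# after the host disconnects, remove it so the next key pair can be installed
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be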
00:19:07.874 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.874 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.134 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.134 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.134 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.134 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.134 00:19:08.134 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.134 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.134 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.395 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.395 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.395 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.395 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.395 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.395 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.395 { 00:19:08.395 "cntlid": 109, 00:19:08.395 "qid": 0, 00:19:08.395 "state": "enabled", 00:19:08.395 "thread": "nvmf_tgt_poll_group_000", 00:19:08.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:08.395 "listen_address": { 00:19:08.395 "trtype": "TCP", 00:19:08.395 "adrfam": "IPv4", 00:19:08.395 "traddr": "10.0.0.2", 00:19:08.395 "trsvcid": "4420" 00:19:08.395 }, 00:19:08.395 "peer_address": { 00:19:08.395 "trtype": "TCP", 00:19:08.396 "adrfam": "IPv4", 00:19:08.396 "traddr": "10.0.0.1", 00:19:08.396 "trsvcid": "51346" 00:19:08.396 }, 00:19:08.396 "auth": { 00:19:08.396 "state": "completed", 00:19:08.396 "digest": "sha512", 00:19:08.396 "dhgroup": "ffdhe2048" 00:19:08.396 } 00:19:08.396 } 00:19:08.396 ]' 00:19:08.396 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.396 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.396 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.657 08:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:08.657 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.657 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.657 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.657 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.657 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:19:08.657 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:19:09.600 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.600 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.601 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.601 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.601 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.601 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.601 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:09.601 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:09.601 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:19:09.601 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.601 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:09.601 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:09.601 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:09.601 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.601 08:18:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:09.601 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.601 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.601 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.601 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:09.601 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:09.601 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:09.862 00:19:09.862 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:09.862 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:09.862 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.862 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.862 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.862 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.862 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.862 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.862 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.862 { 00:19:09.862 "cntlid": 111, 00:19:09.862 "qid": 0, 00:19:09.862 "state": "enabled", 00:19:09.862 "thread": "nvmf_tgt_poll_group_000", 00:19:09.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:09.862 "listen_address": { 00:19:09.862 "trtype": "TCP", 00:19:09.862 "adrfam": "IPv4", 00:19:09.862 "traddr": "10.0.0.2", 00:19:09.862 "trsvcid": "4420" 00:19:09.862 }, 00:19:09.862 "peer_address": { 00:19:09.862 "trtype": "TCP", 00:19:09.862 "adrfam": "IPv4", 00:19:09.862 "traddr": "10.0.0.1", 00:19:09.862 "trsvcid": "51388" 00:19:09.862 }, 00:19:09.862 "auth": { 00:19:09.862 "state": "completed", 00:19:09.862 "digest": "sha512", 00:19:09.862 "dhgroup": "ffdhe2048" 00:19:09.862 } 00:19:09.862 } 00:19:09.862 ]' 00:19:09.862 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.122 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.122 
08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.122 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:10.122 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.122 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.122 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.122 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.383 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:19:10.383 08:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:19:10.956 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.956 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:10.956 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.956 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.957 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.957 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:10.957 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.957 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:10.957 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:11.218 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:19:11.218 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.218 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:11.218 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:11.218 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:11.218 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.218 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.218 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.218 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.218 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.218 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.218 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.218 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.480 00:19:11.480 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.480 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.480 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.480 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.480 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.480 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.480 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.480 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.480 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.480 { 00:19:11.480 "cntlid": 113, 00:19:11.480 "qid": 0, 00:19:11.480 "state": "enabled", 00:19:11.480 "thread": "nvmf_tgt_poll_group_000", 00:19:11.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:11.480 "listen_address": { 00:19:11.480 "trtype": "TCP", 00:19:11.480 "adrfam": "IPv4", 00:19:11.480 "traddr": "10.0.0.2", 00:19:11.480 "trsvcid": "4420" 00:19:11.480 }, 00:19:11.480 "peer_address": { 00:19:11.480 "trtype": "TCP", 00:19:11.480 "adrfam": "IPv4", 00:19:11.480 "traddr": "10.0.0.1", 00:19:11.480 "trsvcid": "51422" 00:19:11.480 }, 00:19:11.480 "auth": { 00:19:11.480 "state": "completed", 00:19:11.480 "digest": "sha512", 00:19:11.480 "dhgroup": "ffdhe3072" 00:19:11.480 } 00:19:11.480 } 00:19:11.480 ]' 00:19:11.742 08:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.742 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.742 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.742 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:11.742 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.742 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.742 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.742 08:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.004 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:19:12.004 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:19:12.576 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.576 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:12.576 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.576 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.576 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.576 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.576 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:12.576 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:12.838 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:19:12.838 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.838 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:19:12.838 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:12.838 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:12.838 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.838 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.838 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.838 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.838 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.838 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.838 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.838 08:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.099 00:19:13.099 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.099 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.099 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.099 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.099 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.099 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.099 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.099 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.361 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.361 { 00:19:13.361 "cntlid": 115, 00:19:13.361 "qid": 0, 00:19:13.361 "state": "enabled", 00:19:13.361 "thread": "nvmf_tgt_poll_group_000", 00:19:13.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:13.361 "listen_address": { 00:19:13.361 "trtype": "TCP", 00:19:13.361 "adrfam": "IPv4", 00:19:13.361 "traddr": "10.0.0.2", 00:19:13.361 "trsvcid": "4420" 00:19:13.361 }, 00:19:13.361 "peer_address": { 00:19:13.361 "trtype": "TCP", 00:19:13.361 "adrfam": "IPv4", 
00:19:13.361 "traddr": "10.0.0.1", 00:19:13.361 "trsvcid": "51454" 00:19:13.361 }, 00:19:13.361 "auth": { 00:19:13.361 "state": "completed", 00:19:13.361 "digest": "sha512", 00:19:13.361 "dhgroup": "ffdhe3072" 00:19:13.361 } 00:19:13.361 } 00:19:13.361 ]' 00:19:13.361 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.361 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.361 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.361 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:13.361 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.361 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.361 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.361 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.633 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:19:13.633 08:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:19:14.203 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.203 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.203 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.203 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.203 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.203 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.203 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:14.203 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:14.463 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:19:14.463 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.463 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:14.463 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:14.463 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:14.463 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.463 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.463 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.463 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.463 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.463 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.463 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.463 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.723 00:19:14.723 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.723 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.723 08:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.983 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.983 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.983 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.983 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.983 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.983 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.983 { 00:19:14.983 "cntlid": 117, 00:19:14.983 "qid": 0, 00:19:14.983 "state": "enabled", 00:19:14.983 "thread": "nvmf_tgt_poll_group_000", 00:19:14.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:14.983 "listen_address": { 00:19:14.983 "trtype": "TCP", 
00:19:14.983 "adrfam": "IPv4", 00:19:14.983 "traddr": "10.0.0.2", 00:19:14.983 "trsvcid": "4420" 00:19:14.983 }, 00:19:14.983 "peer_address": { 00:19:14.983 "trtype": "TCP", 00:19:14.983 "adrfam": "IPv4", 00:19:14.983 "traddr": "10.0.0.1", 00:19:14.983 "trsvcid": "51474" 00:19:14.983 }, 00:19:14.983 "auth": { 00:19:14.983 "state": "completed", 00:19:14.983 "digest": "sha512", 00:19:14.983 "dhgroup": "ffdhe3072" 00:19:14.983 } 00:19:14.983 } 00:19:14.983 ]' 00:19:14.983 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.983 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.983 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.983 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:14.983 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.983 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.983 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.983 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.243 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:19:15.243 08:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:19:15.815 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.816 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:15.816 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.816 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.816 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.816 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.816 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:15.816 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:16.076 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:19:16.076 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.076 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:16.076 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:16.076 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:16.076 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.076 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:16.076 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.076 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.076 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.076 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:16.076 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:16.076 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:16.336 00:19:16.336 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.336 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.336 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.598 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.598 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.598 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.598 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.598 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.598 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.598 { 00:19:16.598 "cntlid": 119, 00:19:16.598 "qid": 0, 00:19:16.598 "state": "enabled", 00:19:16.598 "thread": "nvmf_tgt_poll_group_000", 00:19:16.598 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:16.598 "listen_address": { 00:19:16.598 "trtype": "TCP", 00:19:16.598 "adrfam": "IPv4", 00:19:16.598 "traddr": "10.0.0.2", 00:19:16.598 "trsvcid": "4420" 00:19:16.598 }, 00:19:16.598 "peer_address": { 00:19:16.598 "trtype": "TCP", 00:19:16.598 "adrfam": "IPv4", 00:19:16.598 "traddr": "10.0.0.1", 00:19:16.598 "trsvcid": "51504" 00:19:16.598 }, 00:19:16.598 "auth": { 00:19:16.598 "state": "completed", 00:19:16.598 "digest": "sha512", 00:19:16.598 "dhgroup": "ffdhe3072" 00:19:16.598 } 00:19:16.598 } 00:19:16.598 ]' 00:19:16.598 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.598 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:16.598 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.598 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:16.598 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.598 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.598 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.598 08:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.859 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:19:16.859 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:19:17.431 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.431 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:17.431 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.431 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.431 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.431 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.431 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.431 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:17.431 08:18:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:17.691 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:19:17.691 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.691 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:17.691 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:17.691 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:17.691 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.692 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.692 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.692 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.692 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.692 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.692 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.692 08:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.953 00:19:17.953 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.953 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.953 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.214 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.214 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.214 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.214 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.214 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.214 08:18:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.214 { 00:19:18.214 "cntlid": 121, 00:19:18.214 "qid": 0, 00:19:18.214 "state": "enabled", 00:19:18.214 "thread": "nvmf_tgt_poll_group_000", 00:19:18.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:18.214 "listen_address": { 00:19:18.214 "trtype": "TCP", 00:19:18.214 "adrfam": "IPv4", 00:19:18.214 "traddr": "10.0.0.2", 00:19:18.214 "trsvcid": "4420" 00:19:18.214 }, 00:19:18.214 "peer_address": { 00:19:18.214 "trtype": "TCP", 00:19:18.214 "adrfam": "IPv4", 00:19:18.214 "traddr": "10.0.0.1", 00:19:18.214 "trsvcid": "53750" 00:19:18.214 }, 00:19:18.214 "auth": { 00:19:18.214 "state": "completed", 00:19:18.214 "digest": "sha512", 00:19:18.214 "dhgroup": "ffdhe4096" 00:19:18.214 } 00:19:18.214 } 00:19:18.214 ]' 00:19:18.214 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.214 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.214 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.214 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:18.214 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.214 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.214 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.214 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.475 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:19:18.475 08:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:19:19.047 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.047 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:19.047 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.047 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.047 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
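Each pass of the loop above is one end-to-end DH-HMAC-CHAP exercise for a single (digest, dhgroup, key) triple: the host initiator is pinned to the combination under test, the host NQN is re-added to the subsystem with that key, a controller is attached over the host RPC socket, the negotiated auth fields on the target's qpair are asserted with jq, and everything is torn down before the kernel-initiator leg repeats the handshake. A condensed sketch of the iteration that just completed (sha512 / ffdhe4096 / key0), using only commands that appear verbatim in this log; the bare rpc.py call for the target-side RPCs and the $qpairs plumbing are assumptions, since the script actually goes through its rpc_cmd/hostrpc wrappers:

    # One connect_authenticate iteration (digest=sha512, dhgroup=ffdhe4096, keyid=0).
    # RPC path, addresses, and NQNs are taken from the log above; key0/ckey0 name
    # key objects registered earlier in the run (outside this excerpt).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # Pin the host initiator to the digest/dhgroup under test (host RPC socket).
    "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # Allow the host on the subsystem with the key pair under test
    # (target-side RPC; the script's rpc_cmd wrapper hides the socket).
    "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach a controller from the host, which forces the DH-HMAC-CHAP handshake.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Assert what was actually negotiated on the target's qpair.
    qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Tear down so the next (digest, dhgroup, key) combination starts clean.
    "$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The three jq probes are exactly the .auth.state / .auth.digest / .auth.dhgroup checks that recur after every attach in the log; a mismatch in any of them fails the iteration.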
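One detail worth calling out, since it explains an asymmetry visible throughout this section: the recurring ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line builds the bidirectional-auth arguments with bash's ${var:+word} expansion, which yields word only when the variable is set and non-empty. ckeys[3] is empty in this run, so every key3 iteration calls nvmf_subsystem_add_host and bdev_nvme_attach_controller with --dhchap-key key3 alone and no controller secret. A standalone demo of that expansion (the array contents are made up for illustration, and the script's $3 is the key id passed into connect_authenticate, stood in for here by a loop variable):

    # ${var:+word} expands to word only if var is set AND non-empty;
    # an empty element therefore yields an empty array, i.e. no extra args.
    ckeys=([0]=c0 [1]=c1 [2]=c2 [3]=)   # [3] is set but empty, as in this run
    for keyid in "${!ckeys[@]}"; do
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "key$keyid: ${ckey[*]:-(unidirectional, no ctrlr key)}"
    done
    # key0: --dhchap-ctrlr-key ckey0
    # key1: --dhchap-ctrlr-key ckey1
    # key2: --dhchap-ctrlr-key ckey2
    # key3: (unidirectional, no ctrlr key)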
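The nvme connect / nvme disconnect pairs that bracket each iteration are the kernel-initiator retest of the same combination: nvme-cli passes the secrets in-band in DHHC-1 interchange format (DHHC-1:<hash id>:<base64 secret>:), and the "NQN:... disconnected 1 controller(s)" lines are its confirmation of the teardown. A condensed sketch of that host-side step; the two DHHC-1 strings are the ones printed in full in the log and are elided here rather than retyped:

    # Kernel-initiator leg of the key0 iteration (trsvcid defaults to 4420 on tcp,
    # matching the listener used above).
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0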
00:19:19.047 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.047 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:19.047 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:19.307 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:19:19.307 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.307 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:19.307 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:19.307 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:19.307 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.307 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.307 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.307 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.307 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.307 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.307 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.307 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.568 00:19:19.568 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.568 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.568 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.829 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.829 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.829 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.829 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.829 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.829 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.829 { 00:19:19.829 "cntlid": 123, 00:19:19.829 "qid": 0, 00:19:19.829 "state": "enabled", 00:19:19.829 "thread": "nvmf_tgt_poll_group_000", 00:19:19.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:19.829 "listen_address": { 00:19:19.829 "trtype": "TCP", 00:19:19.829 "adrfam": "IPv4", 00:19:19.829 "traddr": "10.0.0.2", 00:19:19.829 "trsvcid": "4420" 00:19:19.829 }, 00:19:19.829 "peer_address": { 00:19:19.829 "trtype": "TCP", 00:19:19.829 "adrfam": "IPv4", 00:19:19.829 "traddr": "10.0.0.1", 00:19:19.829 "trsvcid": "53778" 00:19:19.829 }, 00:19:19.829 "auth": { 00:19:19.829 "state": "completed", 00:19:19.829 "digest": "sha512", 00:19:19.829 "dhgroup": "ffdhe4096" 00:19:19.829 } 00:19:19.829 } 00:19:19.829 ]' 00:19:19.829 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.829 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.829 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.829 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:19.829 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.829 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.829 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.830 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.091 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:19:20.091 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:19:20.663 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.663 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:20.663 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.663 08:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.663 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.663 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.663 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:20.663 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:20.924 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:19:20.924 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.924 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:20.924 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:20.924 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:20.924 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.924 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.924 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.924 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.924 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.924 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.924 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.924 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.185 00:19:21.185 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.185 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.185 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.445 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.445 08:18:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.445 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.445 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.445 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.445 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.445 { 00:19:21.445 "cntlid": 125, 00:19:21.445 "qid": 0, 00:19:21.445 "state": "enabled", 00:19:21.445 "thread": "nvmf_tgt_poll_group_000", 00:19:21.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:21.445 "listen_address": { 00:19:21.445 "trtype": "TCP", 00:19:21.445 "adrfam": "IPv4", 00:19:21.445 "traddr": "10.0.0.2", 00:19:21.445 "trsvcid": "4420" 00:19:21.445 }, 00:19:21.445 "peer_address": { 00:19:21.445 "trtype": "TCP", 00:19:21.445 "adrfam": "IPv4", 00:19:21.445 "traddr": "10.0.0.1", 00:19:21.445 "trsvcid": "53790" 00:19:21.445 }, 00:19:21.445 "auth": { 00:19:21.445 "state": "completed", 00:19:21.445 "digest": "sha512", 00:19:21.445 "dhgroup": "ffdhe4096" 00:19:21.445 } 00:19:21.445 } 00:19:21.445 ]' 00:19:21.445 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.445 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.445 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.445 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:21.445 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.445 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.445 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.445 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.706 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:19:21.706 08:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:19:22.277 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.277 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:22.277 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.277 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.277 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.277 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.277 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:22.277 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:22.538 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:19:22.538 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.538 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:22.538 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:22.538 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:22.538 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.538 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:22.538 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.538 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.538 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.538 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:22.538 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.538 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.799 00:19:22.799 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.799 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.799 08:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.060 08:18:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.060 08:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.060 08:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.060 08:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.060 08:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.060 08:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.060 { 00:19:23.060 "cntlid": 127, 00:19:23.060 "qid": 0, 00:19:23.060 "state": "enabled", 00:19:23.060 "thread": "nvmf_tgt_poll_group_000", 00:19:23.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:23.060 "listen_address": { 00:19:23.060 "trtype": "TCP", 00:19:23.060 "adrfam": "IPv4", 00:19:23.060 "traddr": "10.0.0.2", 00:19:23.060 "trsvcid": "4420" 00:19:23.060 }, 00:19:23.060 "peer_address": { 00:19:23.060 "trtype": "TCP", 00:19:23.060 "adrfam": "IPv4", 00:19:23.060 "traddr": "10.0.0.1", 00:19:23.060 "trsvcid": "53836" 00:19:23.060 }, 00:19:23.060 "auth": { 00:19:23.060 "state": "completed", 00:19:23.060 "digest": "sha512", 00:19:23.060 "dhgroup": "ffdhe4096" 00:19:23.060 } 00:19:23.060 } 00:19:23.060 ]' 00:19:23.060 08:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.060 08:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.060 08:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.060 08:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:23.060 08:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.060 08:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.060 08:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.060 08:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.321 08:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:19:23.321 08:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:19:23.893 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.893 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:23.893 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.893 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.893 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.893 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.893 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.893 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:23.893 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:24.154 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:19:24.154 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:24.154 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:24.154 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:24.154 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:24.154 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.154 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.154 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.154 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.154 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.154 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.154 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.154 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.415 00:19:24.415 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.415 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.415 
08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.676 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.676 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.676 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.676 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.676 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.676 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.676 { 00:19:24.676 "cntlid": 129, 00:19:24.676 "qid": 0, 00:19:24.676 "state": "enabled", 00:19:24.676 "thread": "nvmf_tgt_poll_group_000", 00:19:24.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:24.676 "listen_address": { 00:19:24.676 "trtype": "TCP", 00:19:24.676 "adrfam": "IPv4", 00:19:24.676 "traddr": "10.0.0.2", 00:19:24.676 "trsvcid": "4420" 00:19:24.676 }, 00:19:24.676 "peer_address": { 00:19:24.676 "trtype": "TCP", 00:19:24.676 "adrfam": "IPv4", 00:19:24.676 "traddr": "10.0.0.1", 00:19:24.676 "trsvcid": "53856" 00:19:24.676 }, 00:19:24.676 "auth": { 00:19:24.676 "state": "completed", 00:19:24.676 "digest": "sha512", 00:19:24.676 "dhgroup": "ffdhe6144" 00:19:24.676 } 00:19:24.676 } 00:19:24.676 ]' 00:19:24.676 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.676 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.676 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.676 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:24.676 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.938 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.938 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.938 08:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.938 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:19:24.938 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret 
DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:19:25.880 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.880 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.880 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.880 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.880 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.880 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.880 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:25.880 08:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:25.880 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:19:25.880 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.880 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:25.880 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:25.880 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:25.880 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.880 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.880 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.880 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.880 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.880 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.880 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.880 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.141 00:19:26.141 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.141 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.141 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.401 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.401 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.401 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.401 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.401 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.401 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.401 { 00:19:26.401 "cntlid": 131, 00:19:26.401 "qid": 0, 00:19:26.401 "state": "enabled", 00:19:26.401 "thread": "nvmf_tgt_poll_group_000", 00:19:26.401 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:26.401 "listen_address": { 00:19:26.401 "trtype": "TCP", 00:19:26.401 "adrfam": "IPv4", 00:19:26.401 "traddr": "10.0.0.2", 00:19:26.401 "trsvcid": "4420" 00:19:26.401 }, 00:19:26.401 "peer_address": { 00:19:26.401 "trtype": "TCP", 00:19:26.401 "adrfam": "IPv4", 00:19:26.401 "traddr": "10.0.0.1", 00:19:26.401 "trsvcid": "53890" 00:19:26.401 }, 00:19:26.401 "auth": { 00:19:26.401 "state": "completed", 00:19:26.401 "digest": "sha512", 00:19:26.401 "dhgroup": "ffdhe6144" 00:19:26.401 } 00:19:26.401 } 00:19:26.401 ]' 00:19:26.401 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.401 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.401 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.401 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:26.401 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.401 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.401 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.401 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.662 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:19:26.662 08:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:19:27.603 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.604 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:27.604 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.604 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.604 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.604 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.604 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:27.604 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:27.604 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:19:27.604 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.604 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:27.604 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:27.604 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:27.604 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.604 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.604 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.604 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.604 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.604 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.604 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.604 08:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.864 00:19:27.864 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:27.864 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:27.864 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.125 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.125 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.125 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.125 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.125 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.125 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.125 { 00:19:28.125 "cntlid": 133, 00:19:28.125 "qid": 0, 00:19:28.125 "state": "enabled", 00:19:28.125 "thread": "nvmf_tgt_poll_group_000", 00:19:28.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:28.125 "listen_address": { 00:19:28.125 "trtype": "TCP", 00:19:28.125 "adrfam": "IPv4", 00:19:28.125 "traddr": "10.0.0.2", 00:19:28.125 "trsvcid": "4420" 00:19:28.125 }, 00:19:28.125 "peer_address": { 00:19:28.125 "trtype": "TCP", 00:19:28.125 "adrfam": "IPv4", 00:19:28.125 "traddr": "10.0.0.1", 00:19:28.125 "trsvcid": "35782" 00:19:28.125 }, 00:19:28.125 "auth": { 00:19:28.125 "state": "completed", 00:19:28.125 "digest": "sha512", 00:19:28.125 "dhgroup": "ffdhe6144" 00:19:28.125 } 00:19:28.125 } 00:19:28.125 ]' 00:19:28.125 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.125 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.125 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.125 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:28.125 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.125 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.125 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.125 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.385 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret 
DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:19:28.385 08:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:19:28.957 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.957 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:28.957 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.957 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.218 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.218 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.218 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:29.218 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:29.218 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:29.218 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.218 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:29.218 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:29.218 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:29.218 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.218 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:29.218 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.218 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.218 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.219 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:29.219 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:19:29.219 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:29.479 00:19:29.741 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.741 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.741 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.741 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.741 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.741 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.741 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.741 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.741 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.741 { 00:19:29.741 "cntlid": 135, 00:19:29.741 "qid": 0, 00:19:29.741 "state": "enabled", 00:19:29.741 "thread": "nvmf_tgt_poll_group_000", 00:19:29.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:29.741 "listen_address": { 00:19:29.741 "trtype": "TCP", 00:19:29.741 "adrfam": "IPv4", 00:19:29.741 "traddr": "10.0.0.2", 00:19:29.741 "trsvcid": "4420" 00:19:29.741 }, 00:19:29.741 "peer_address": { 00:19:29.741 "trtype": "TCP", 00:19:29.741 "adrfam": "IPv4", 00:19:29.741 "traddr": "10.0.0.1", 00:19:29.741 "trsvcid": "35804" 00:19:29.741 }, 00:19:29.741 "auth": { 00:19:29.741 "state": "completed", 00:19:29.741 "digest": "sha512", 00:19:29.741 "dhgroup": "ffdhe6144" 00:19:29.741 } 00:19:29.741 } 00:19:29.741 ]' 00:19:29.741 08:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.741 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.741 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.001 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:30.001 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.001 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.001 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.001 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.001 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:19:30.001 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:19:30.944 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.944 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:30.944 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.944 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.944 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.944 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:30.944 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.944 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:30.944 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:30.944 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:19:30.944 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.944 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:30.944 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:30.944 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:30.944 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.944 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.944 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.944 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.944 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.944 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.944 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.944 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.516 00:19:31.516 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.516 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.516 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.516 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.516 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.516 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.516 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.516 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.516 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.516 { 00:19:31.516 "cntlid": 137, 00:19:31.516 "qid": 0, 00:19:31.516 "state": "enabled", 00:19:31.516 "thread": "nvmf_tgt_poll_group_000", 00:19:31.516 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:31.516 "listen_address": { 00:19:31.516 "trtype": "TCP", 00:19:31.516 "adrfam": "IPv4", 00:19:31.516 "traddr": "10.0.0.2", 00:19:31.516 "trsvcid": "4420" 00:19:31.516 }, 00:19:31.516 "peer_address": { 00:19:31.516 "trtype": "TCP", 00:19:31.516 "adrfam": "IPv4", 00:19:31.516 "traddr": "10.0.0.1", 00:19:31.516 "trsvcid": "35848" 00:19:31.516 }, 00:19:31.516 "auth": { 00:19:31.516 "state": "completed", 00:19:31.516 "digest": "sha512", 00:19:31.516 "dhgroup": "ffdhe8192" 00:19:31.516 } 00:19:31.516 } 00:19:31.516 ]' 00:19:31.516 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.516 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.777 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.777 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:31.777 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.777 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.777 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.777 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.777 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:19:31.777 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:19:32.720 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.720 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:32.720 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.720 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.720 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.720 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.720 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:32.720 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:32.720 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:32.720 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.720 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:32.720 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:32.720 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:32.720 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.720 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.720 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.720 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.720 08:18:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.720 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.720 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.720 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.293 00:19:33.293 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.293 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.293 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.293 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.293 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.293 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.293 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.293 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.293 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.293 { 00:19:33.293 "cntlid": 139, 00:19:33.293 "qid": 0, 00:19:33.293 "state": "enabled", 00:19:33.293 "thread": "nvmf_tgt_poll_group_000", 00:19:33.293 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:33.293 "listen_address": { 00:19:33.293 "trtype": "TCP", 00:19:33.293 "adrfam": "IPv4", 00:19:33.293 "traddr": "10.0.0.2", 00:19:33.293 "trsvcid": "4420" 00:19:33.293 }, 00:19:33.293 "peer_address": { 00:19:33.293 "trtype": "TCP", 00:19:33.293 "adrfam": "IPv4", 00:19:33.293 "traddr": "10.0.0.1", 00:19:33.293 "trsvcid": "35876" 00:19:33.293 }, 00:19:33.293 "auth": { 00:19:33.293 "state": "completed", 00:19:33.293 "digest": "sha512", 00:19:33.293 "dhgroup": "ffdhe8192" 00:19:33.293 } 00:19:33.293 } 00:19:33.293 ]' 00:19:33.293 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.554 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.554 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.554 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:33.554 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.554 08:18:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.554 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.554 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.815 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:19:33.815 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: --dhchap-ctrl-secret DHHC-1:02:MTYyYTE1NWJlOGYzYmEwODZiOTk0OWM5MmYzOTNjNmMwNzI1ZDhhMTY3YzQyZWZhttwu1g==: 00:19:34.385 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.385 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:34.385 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.385 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.385 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.385 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.385 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:34.385 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:34.645 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:34.645 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.645 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:34.645 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:34.645 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:34.645 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.645 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.645 08:18:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.645 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.645 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.645 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.645 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.645 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.215 00:19:35.215 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.215 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.215 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.215 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.215 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.215 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.215 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.215 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.215 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.215 { 00:19:35.215 "cntlid": 141, 00:19:35.215 "qid": 0, 00:19:35.215 "state": "enabled", 00:19:35.215 "thread": "nvmf_tgt_poll_group_000", 00:19:35.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:35.215 "listen_address": { 00:19:35.215 "trtype": "TCP", 00:19:35.215 "adrfam": "IPv4", 00:19:35.215 "traddr": "10.0.0.2", 00:19:35.215 "trsvcid": "4420" 00:19:35.215 }, 00:19:35.215 "peer_address": { 00:19:35.215 "trtype": "TCP", 00:19:35.215 "adrfam": "IPv4", 00:19:35.215 "traddr": "10.0.0.1", 00:19:35.215 "trsvcid": "35890" 00:19:35.215 }, 00:19:35.215 "auth": { 00:19:35.215 "state": "completed", 00:19:35.215 "digest": "sha512", 00:19:35.215 "dhgroup": "ffdhe8192" 00:19:35.215 } 00:19:35.215 } 00:19:35.215 ]' 00:19:35.215 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.215 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.475 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.476 08:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:35.476 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.476 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.476 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.476 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.736 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:19:35.736 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:01:OWMwYjkyNmM1MmNhNDZmNDY2NmI3NWExOGU1OWU5NmFFWTPp: 00:19:36.308 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.308 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:36.308 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.308 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.308 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.308 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.308 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:36.308 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:36.569 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:19:36.569 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.569 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:36.569 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:36.569 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:36.569 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.569 08:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:36.569 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.569 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.569 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.569 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:36.569 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.569 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:37.139 00:19:37.139 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.139 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.139 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.139 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.139 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.139 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.139 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.139 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.139 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.139 { 00:19:37.139 "cntlid": 143, 00:19:37.139 "qid": 0, 00:19:37.139 "state": "enabled", 00:19:37.139 "thread": "nvmf_tgt_poll_group_000", 00:19:37.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:37.139 "listen_address": { 00:19:37.139 "trtype": "TCP", 00:19:37.139 "adrfam": "IPv4", 00:19:37.139 "traddr": "10.0.0.2", 00:19:37.139 "trsvcid": "4420" 00:19:37.139 }, 00:19:37.139 "peer_address": { 00:19:37.139 "trtype": "TCP", 00:19:37.139 "adrfam": "IPv4", 00:19:37.139 "traddr": "10.0.0.1", 00:19:37.139 "trsvcid": "35922" 00:19:37.139 }, 00:19:37.139 "auth": { 00:19:37.139 "state": "completed", 00:19:37.139 "digest": "sha512", 00:19:37.139 "dhgroup": "ffdhe8192" 00:19:37.139 } 00:19:37.139 } 00:19:37.139 ]' 00:19:37.139 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.139 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:37.139 
08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.401 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:37.401 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.401 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.401 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.401 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.661 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:19:37.661 08:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:19:38.231 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.231 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:38.231 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.231 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.231 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.231 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:38.231 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:19:38.231 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:38.231 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:38.231 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:38.231 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:38.492 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:19:38.492 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.492 08:18:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:38.492 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:38.492 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:38.493 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.493 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.493 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.493 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.493 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.493 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.493 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.493 08:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.753 00:19:39.013 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.013 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.013 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.013 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.013 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.013 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.013 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.013 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.013 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.013 { 00:19:39.013 "cntlid": 145, 00:19:39.013 "qid": 0, 00:19:39.013 "state": "enabled", 00:19:39.013 "thread": "nvmf_tgt_poll_group_000", 00:19:39.013 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:39.013 "listen_address": { 00:19:39.013 "trtype": "TCP", 00:19:39.013 "adrfam": "IPv4", 00:19:39.013 "traddr": "10.0.0.2", 00:19:39.013 "trsvcid": "4420" 00:19:39.013 }, 00:19:39.013 "peer_address": { 00:19:39.013 
"trtype": "TCP", 00:19:39.013 "adrfam": "IPv4", 00:19:39.013 "traddr": "10.0.0.1", 00:19:39.013 "trsvcid": "34214" 00:19:39.013 }, 00:19:39.013 "auth": { 00:19:39.013 "state": "completed", 00:19:39.013 "digest": "sha512", 00:19:39.013 "dhgroup": "ffdhe8192" 00:19:39.013 } 00:19:39.013 } 00:19:39.013 ]' 00:19:39.013 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.273 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:39.273 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.273 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:39.273 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.273 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.273 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.273 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.534 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:19:39.534 08:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NTc5ZjNmMjNlZGU1MjhjNjljZDYwNmMyYTAyOTgyMDU3MWEwZTc5ZmU3MzA5MWFjJ3TE1g==: --dhchap-ctrl-secret DHHC-1:03:NTEwMDlkZDI3ZTEwZmEwZGZjODQ1ZTRlYzI5N2Y4M2UxYTI0MmY4ZDg1ZWUzZjhhM2VmZDJjZTdhODhhYWZhMcyDYfc=: 00:19:40.106 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.106 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:40.106 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.106 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.106 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.106 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:40.106 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.106 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.106 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.106 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:19:40.106 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:40.106 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:40.106 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:40.106 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:40.106 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:40.107 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:40.107 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:40.107 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:40.107 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:40.680 request: 00:19:40.680 { 00:19:40.680 "name": "nvme0", 00:19:40.680 "trtype": "tcp", 00:19:40.680 "traddr": "10.0.0.2", 00:19:40.680 "adrfam": "ipv4", 00:19:40.680 "trsvcid": "4420", 00:19:40.680 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:40.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:40.680 "prchk_reftag": false, 00:19:40.680 "prchk_guard": false, 00:19:40.680 "hdgst": false, 00:19:40.680 "ddgst": false, 00:19:40.680 "dhchap_key": "key2", 00:19:40.680 "allow_unrecognized_csi": false, 00:19:40.680 "method": "bdev_nvme_attach_controller", 00:19:40.680 "req_id": 1 00:19:40.680 } 00:19:40.680 Got JSON-RPC error response 00:19:40.680 response: 00:19:40.680 { 00:19:40.680 "code": -5, 00:19:40.680 "message": "Input/output error" 00:19:40.680 } 00:19:40.680 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:40.680 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:40.680 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:40.680 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:40.680 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:40.680 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.680 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.680 08:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.680 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.680 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.680 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.680 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.680 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:40.680 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:40.680 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:40.680 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:40.680 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:40.680 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:40.680 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:40.680 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:40.680 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:40.680 08:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:40.943 request: 00:19:40.943 { 00:19:40.943 "name": "nvme0", 00:19:40.943 "trtype": "tcp", 00:19:40.943 "traddr": "10.0.0.2", 00:19:40.943 "adrfam": "ipv4", 00:19:40.943 "trsvcid": "4420", 00:19:40.943 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:40.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:40.943 "prchk_reftag": false, 00:19:40.943 "prchk_guard": false, 00:19:40.943 "hdgst": false, 00:19:40.943 "ddgst": false, 00:19:40.943 "dhchap_key": "key1", 00:19:40.943 "dhchap_ctrlr_key": "ckey2", 00:19:40.943 "allow_unrecognized_csi": false, 00:19:40.943 "method": "bdev_nvme_attach_controller", 00:19:40.943 "req_id": 1 00:19:40.943 } 00:19:40.943 Got JSON-RPC error response 00:19:40.943 response: 00:19:40.943 { 00:19:40.943 "code": -5, 00:19:40.943 "message": "Input/output error" 00:19:40.943 } 00:19:40.943 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:40.943 08:18:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:40.943 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:40.943 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:40.943 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:40.943 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.943 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.943 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.943 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:40.943 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.943 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.943 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.943 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.943 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:40.943 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.943 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:40.943 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:40.943 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:40.943 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:40.943 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.943 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.943 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.515 request: 00:19:41.515 { 00:19:41.515 "name": "nvme0", 00:19:41.515 "trtype": "tcp", 00:19:41.515 "traddr": "10.0.0.2", 00:19:41.515 "adrfam": "ipv4", 00:19:41.515 "trsvcid": "4420", 00:19:41.515 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:41.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:41.515 "prchk_reftag": false, 00:19:41.515 "prchk_guard": false, 00:19:41.515 "hdgst": false, 00:19:41.515 "ddgst": false, 00:19:41.515 "dhchap_key": "key1", 00:19:41.515 "dhchap_ctrlr_key": "ckey1", 00:19:41.515 "allow_unrecognized_csi": false, 00:19:41.515 "method": "bdev_nvme_attach_controller", 00:19:41.515 "req_id": 1 00:19:41.515 } 00:19:41.515 Got JSON-RPC error response 00:19:41.515 response: 00:19:41.515 { 00:19:41.515 "code": -5, 00:19:41.515 "message": "Input/output error" 00:19:41.515 } 00:19:41.515 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:41.515 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:41.515 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:41.515 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:41.515 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:41.515 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.515 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.515 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.515 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1942635 00:19:41.515 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1942635 ']' 00:19:41.515 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1942635 00:19:41.515 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:41.515 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:41.515 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1942635 00:19:41.515 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:41.515 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:41.515 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1942635' 00:19:41.515 killing process with pid 1942635 00:19:41.515 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1942635 00:19:41.515 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1942635 00:19:41.776 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:41.776 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:41.776 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:41.776 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:41.776 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1968829 00:19:41.776 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1968829 00:19:41.776 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:41.776 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1968829 ']' 00:19:41.776 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.776 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:41.776 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.776 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:41.776 08:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.718 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.718 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:42.718 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:42.718 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:42.718 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.718 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.718 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:42.718 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1968829 00:19:42.718 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1968829 ']' 00:19:42.718 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.718 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.718 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
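In outline, the target restart captured above amounts to the shell sketch below; the paths, netns name, and the -L nvmf_auth flag are taken from the log itself, while the polling loop is only a hedged stand-in for autotest_common.sh's waitforlisten helper, not its actual code:

# Relaunch the SPDK NVMe-oF target inside the test netns with the auth
# debug log component enabled, then wait for its RPC socket to answer.
sudo ip netns exec cvl_0_0_ns_spdk \
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
# Simplified waitforlisten equivalent: poll the default RPC socket until
# the app responds (rpc_get_methods is a standard SPDK RPC).
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done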
00:19:42.718 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.718 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.718 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.718 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:42.718 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:19:42.718 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.718 08:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.718 null0 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.n5R 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.WML ]] 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WML 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.f72 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.U19 ]] 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.U19 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:42.978 08:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.vdN 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.6Vp ]] 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6Vp 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.5UZ 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
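At this point the fixture keys are in place: target/auth.sh has registered key0 through key3 (plus the matching ckey controller keys, where present) with keyring_file_add_key, and connect_authenticate is about to prove a sha512 / ffdhe8192 handshake using key3. The hostrpc wrapper above simply forwards to rpc.py on the host-side application socket, as the next trace line shows. Condensed, the two halves of a DH-HMAC-CHAP setup look like this (host NQN abbreviated to <host_nqn> for readability; all RPC names are the ones used in this log):

  # target side: register the key file, then authorize the host NQN with it
  rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.5UZ
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host_nqn> --dhchap-key key3

  # host side: attach with the same key; digest and DH group are negotiated
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q <host_nqn> -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key3

If the keys on both ends agree, the controller comes up and nvmf_subsystem_get_qpairs reports the qpair with auth state "completed", as verified a few lines below.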
00:19:42.978 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:43.920 nvme0n1 00:19:43.920 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.920 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.920 08:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.920 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.920 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.920 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.920 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.920 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.920 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.920 { 00:19:43.920 "cntlid": 1, 00:19:43.920 "qid": 0, 00:19:43.920 "state": "enabled", 00:19:43.920 "thread": "nvmf_tgt_poll_group_000", 00:19:43.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:43.920 "listen_address": { 00:19:43.920 "trtype": "TCP", 00:19:43.920 "adrfam": "IPv4", 00:19:43.920 "traddr": "10.0.0.2", 00:19:43.920 "trsvcid": "4420" 00:19:43.920 }, 00:19:43.920 "peer_address": { 00:19:43.920 "trtype": "TCP", 00:19:43.920 "adrfam": "IPv4", 00:19:43.920 "traddr": "10.0.0.1", 00:19:43.920 "trsvcid": "34288" 00:19:43.920 }, 00:19:43.920 "auth": { 00:19:43.920 "state": "completed", 00:19:43.921 "digest": "sha512", 00:19:43.921 "dhgroup": "ffdhe8192" 00:19:43.921 } 00:19:43.921 } 00:19:43.921 ]' 00:19:43.921 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.921 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.921 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.921 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:43.921 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.921 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.921 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.921 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.182 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:19:44.182 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:19:44.755 08:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.016 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:45.016 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.016 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.016 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.016 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:45.016 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.016 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.016 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.016 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:45.016 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:45.016 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:45.016 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:45.016 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:45.016 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:45.016 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:45.016 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:45.016 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:45.016 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:45.016 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:45.016 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:45.278 request: 00:19:45.278 { 00:19:45.278 "name": "nvme0", 00:19:45.278 "trtype": "tcp", 00:19:45.278 "traddr": "10.0.0.2", 00:19:45.278 "adrfam": "ipv4", 00:19:45.278 "trsvcid": "4420", 00:19:45.278 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:45.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:45.278 "prchk_reftag": false, 00:19:45.278 "prchk_guard": false, 00:19:45.278 "hdgst": false, 00:19:45.278 "ddgst": false, 00:19:45.278 "dhchap_key": "key3", 00:19:45.278 "allow_unrecognized_csi": false, 00:19:45.278 "method": "bdev_nvme_attach_controller", 00:19:45.278 "req_id": 1 00:19:45.278 } 00:19:45.278 Got JSON-RPC error response 00:19:45.278 response: 00:19:45.278 { 00:19:45.278 "code": -5, 00:19:45.278 "message": "Input/output error" 00:19:45.278 } 00:19:45.278 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:45.278 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:45.278 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:45.278 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:45.278 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:45.278 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:45.278 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:45.278 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:45.616 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:45.616 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:45.616 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:45.616 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:45.616 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:45.616 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:45.616 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:45.616 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:45.616 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:45.616 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:45.616 request: 00:19:45.616 { 00:19:45.616 "name": "nvme0", 00:19:45.616 "trtype": "tcp", 00:19:45.616 "traddr": "10.0.0.2", 00:19:45.616 "adrfam": "ipv4", 00:19:45.616 "trsvcid": "4420", 00:19:45.616 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:45.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:45.616 "prchk_reftag": false, 00:19:45.616 "prchk_guard": false, 00:19:45.616 "hdgst": false, 00:19:45.616 "ddgst": false, 00:19:45.616 "dhchap_key": "key3", 00:19:45.616 "allow_unrecognized_csi": false, 00:19:45.616 "method": "bdev_nvme_attach_controller", 00:19:45.616 "req_id": 1 00:19:45.616 } 00:19:45.616 Got JSON-RPC error response 00:19:45.616 response: 00:19:45.616 { 00:19:45.616 "code": -5, 00:19:45.616 "message": "Input/output error" 00:19:45.616 } 00:19:45.616 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:45.616 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:45.617 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:45.617 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:45.617 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:45.617 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:45.617 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:45.617 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:45.617 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:45.617 08:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:45.901 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:45.901 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.901 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.901 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.901 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:45.901 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.901 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.901 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.901 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:45.901 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:45.901 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:45.901 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:45.901 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:45.901 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:45.901 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:45.901 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:45.901 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:45.901 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:46.190 request: 00:19:46.190 { 00:19:46.190 "name": "nvme0", 00:19:46.190 "trtype": "tcp", 00:19:46.190 "traddr": "10.0.0.2", 00:19:46.190 "adrfam": "ipv4", 00:19:46.190 "trsvcid": "4420", 00:19:46.190 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:46.190 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:46.190 "prchk_reftag": false, 00:19:46.190 "prchk_guard": false, 00:19:46.190 "hdgst": false, 00:19:46.190 "ddgst": false, 00:19:46.190 "dhchap_key": "key0", 00:19:46.190 "dhchap_ctrlr_key": "key1", 00:19:46.190 "allow_unrecognized_csi": false, 00:19:46.190 "method": "bdev_nvme_attach_controller", 00:19:46.190 "req_id": 1 00:19:46.190 } 00:19:46.190 Got JSON-RPC error response 00:19:46.190 response: 00:19:46.190 { 00:19:46.190 "code": -5, 00:19:46.191 "message": "Input/output error" 00:19:46.191 } 00:19:46.191 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:46.191 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:46.191 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:46.191 08:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:46.191 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:46.191 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:46.191 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:46.452 nvme0n1 00:19:46.452 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:19:46.452 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:46.452 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.712 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.712 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.712 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.712 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:46.712 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.712 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.973 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.973 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:46.973 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:46.973 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:47.544 nvme0n1 00:19:47.544 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:47.544 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:47.544 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:47.805 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.805 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:47.805 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.805 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.805 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.805 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:47.805 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:47.805 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.066 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.066 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:19:48.066 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: --dhchap-ctrl-secret DHHC-1:03:Y2M0ODA0ZjY5NWVhMzNkNDZlMDdjYjgxOTExMDQ2Njc1NjUwYzI0ZjI0OTMxMTVhODllNmZkNjk5M2Y3ODQxZEb/FhU=: 00:19:48.638 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:48.638 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:48.638 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:48.638 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:48.638 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:48.638 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:48.638 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:48.638 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.638 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.898 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT 
bdev_connect -b nvme0 --dhchap-key key1 00:19:48.898 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:48.898 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:48.898 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:48.898 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:48.898 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:48.898 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:48.898 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:48.898 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:48.898 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:49.160 request: 00:19:49.160 { 00:19:49.160 "name": "nvme0", 00:19:49.160 "trtype": "tcp", 00:19:49.160 "traddr": "10.0.0.2", 00:19:49.161 "adrfam": "ipv4", 00:19:49.161 "trsvcid": "4420", 00:19:49.161 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:49.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:49.161 "prchk_reftag": false, 00:19:49.161 "prchk_guard": false, 00:19:49.161 "hdgst": false, 00:19:49.161 "ddgst": false, 00:19:49.161 "dhchap_key": "key1", 00:19:49.161 "allow_unrecognized_csi": false, 00:19:49.161 "method": "bdev_nvme_attach_controller", 00:19:49.161 "req_id": 1 00:19:49.161 } 00:19:49.161 Got JSON-RPC error response 00:19:49.161 response: 00:19:49.161 { 00:19:49.161 "code": -5, 00:19:49.161 "message": "Input/output error" 00:19:49.161 } 00:19:49.161 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:49.161 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:49.161 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:49.161 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:49.161 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:49.161 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:49.161 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:50.104 nvme0n1 00:19:50.104 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:50.104 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:50.104 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.104 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.104 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.104 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.366 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:50.366 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.366 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.366 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.366 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:50.366 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:50.366 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:50.626 nvme0n1 00:19:50.626 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:50.626 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:50.626 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.886 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.886 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.886 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.887 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:50.887 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.887 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.887 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.887 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: '' 2s 00:19:50.887 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:50.887 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:50.887 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: 00:19:50.887 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:50.887 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:50.887 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:50.887 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: ]] 00:19:50.887 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MTU0ODE4Njk0MTAzZTFkNDA3NWRhNzFhMzk2Nzc0OTA1Rjjk: 00:19:50.887 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:50.887 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:50.887 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:53.432 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:53.432 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:53.432 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:53.432 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:53.432 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:53.432 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:53.432 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:53.432 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:53.432 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.432 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.432 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.432 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # 
nvme_set_keys nvme0 '' DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: 2s 00:19:53.432 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:53.432 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:53.432 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:53.432 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: 00:19:53.432 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:53.432 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:53.432 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:53.432 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: ]] 00:19:53.433 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MDk2Y2IwMzc5YzJiN2YyMjRhMmUyMzQyMzA3MjhmZTI0ZWM5NjFjZGUyZTE2ZDQ0MGJumQ==: 00:19:53.433 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:53.433 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:55.346 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:55.346 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:55.346 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:55.346 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:55.346 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:55.346 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:55.346 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:55.346 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.346 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:55.346 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.346 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.346 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.346 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:55.346 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:55.346 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:55.916 nvme0n1 00:19:55.916 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:55.916 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.916 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.916 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.916 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:55.916 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:56.486 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:56.486 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:56.486 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.486 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.486 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:56.486 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.486 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.486 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.486 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:56.486 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:56.746 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:56.746 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:56.746 08:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.007 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.007 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:57.007 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.007 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.007 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.007 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:57.007 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:57.007 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:57.007 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:57.007 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:57.007 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:57.007 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:57.007 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:57.007 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:57.268 request: 00:19:57.268 { 00:19:57.268 "name": "nvme0", 00:19:57.268 "dhchap_key": "key1", 00:19:57.268 "dhchap_ctrlr_key": "key3", 00:19:57.268 "method": "bdev_nvme_set_keys", 00:19:57.268 "req_id": 1 00:19:57.268 } 00:19:57.268 Got JSON-RPC error response 00:19:57.268 response: 00:19:57.268 { 00:19:57.268 "code": -13, 00:19:57.268 "message": "Permission denied" 00:19:57.268 } 00:19:57.529 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:57.529 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:57.529 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:57.529 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:57.529 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:57.529 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.529 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:57.529 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:19:57.529 08:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:58.470 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:58.470 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:58.470 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.733 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:58.733 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:58.733 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.733 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.733 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.733 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:58.733 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:58.733 08:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:59.678 nvme0n1 00:19:59.678 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:59.678 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.678 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.678 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.678 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:59.678 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:59.678 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:59.678 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
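The trace is entering another negative check here: NOT (from autotest_common.sh) inverts the exit status of the command it wraps, so the step passes only if the wrapped call fails. The host tries to rotate to --dhchap-key key2 --dhchap-ctrlr-key key0 after the target was re-keyed to key2/key3, so the target must refuse the stale controller key. Written out without the helper, the same assertion would look roughly like this (hypothetical inline version; the real test uses the NOT wrapper traced above):

  # the rotation must be rejected; JSON-RPC error -13 "Permission denied" is success here
  if rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key0; then
      echo "re-key with a stale controller key unexpectedly succeeded" >&2
      exit 1
  fi

The request/response pair that follows shows exactly that: bdev_nvme_set_keys returns code -13, Permission denied, and the test continues.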
00:19:59.678 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.678 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:59.678 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.678 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:59.678 08:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:59.940 request: 00:19:59.940 { 00:19:59.940 "name": "nvme0", 00:19:59.940 "dhchap_key": "key2", 00:19:59.940 "dhchap_ctrlr_key": "key0", 00:19:59.940 "method": "bdev_nvme_set_keys", 00:19:59.940 "req_id": 1 00:19:59.940 } 00:19:59.940 Got JSON-RPC error response 00:19:59.940 response: 00:19:59.940 { 00:19:59.940 "code": -13, 00:19:59.940 "message": "Permission denied" 00:19:59.940 } 00:19:59.940 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:59.940 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:59.940 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:59.940 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:59.940 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:59.940 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:59.940 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.201 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:20:00.201 08:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:20:01.141 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:01.141 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:01.141 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.400 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:20:01.400 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:20:01.400 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:20:01.400 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1942742 00:20:01.400 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1942742 ']' 00:20:01.400 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1942742 00:20:01.400 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:01.400 
08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:01.400 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1942742 00:20:01.400 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:01.400 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:01.400 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1942742' 00:20:01.400 killing process with pid 1942742 00:20:01.400 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1942742 00:20:01.400 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1942742 00:20:01.660 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:01.660 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:01.660 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:20:01.660 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:01.660 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:20:01.660 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:01.660 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:01.660 rmmod nvme_tcp 00:20:01.660 rmmod nvme_fabrics 00:20:01.660 rmmod nvme_keyring 00:20:01.660 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:01.660 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:20:01.660 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:20:01.660 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1968829 ']' 00:20:01.660 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1968829 00:20:01.660 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1968829 ']' 00:20:01.660 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1968829 00:20:01.660 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:01.660 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:01.660 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1968829 00:20:01.920 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:01.920 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:01.920 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1968829' 00:20:01.920 killing process with pid 1968829 00:20:01.920 08:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1968829 00:20:01.920 08:18:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1968829 00:20:01.920 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:01.920 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:01.920 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:01.920 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:20:01.920 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:20:01.920 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:01.920 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:20:01.920 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:01.920 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:01.920 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.920 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:01.920 08:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.n5R /tmp/spdk.key-sha256.f72 /tmp/spdk.key-sha384.vdN /tmp/spdk.key-sha512.5UZ /tmp/spdk.key-sha512.WML /tmp/spdk.key-sha384.U19 /tmp/spdk.key-sha256.6Vp '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:04.461 00:20:04.461 real 2m37.064s 00:20:04.461 user 5m53.135s 00:20:04.461 sys 0m25.117s 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.461 ************************************ 00:20:04.461 END TEST nvmf_auth_target 00:20:04.461 ************************************ 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:04.461 ************************************ 00:20:04.461 START TEST nvmf_bdevio_no_huge 00:20:04.461 ************************************ 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:04.461 * Looking for test storage... 
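The auth-target teardown traced above rotates to an unauthorized key pair on purpose: bdev_nvme_set_keys with key2/key0 draws the JSON-RPC error -13 (Permission denied), the harness treats the nonzero status (es=1) as the expected outcome, and the host then polls once per second until its controller list drains before killing both daemons. A minimal standalone sketch of that drain loop, assuming the rpc.py path and /var/tmp/host.sock socket printed in the log (the 30-attempt cap is an added assumption, not in the original script):

    # Poll the host-side RPC socket until bdev_nvme_get_controllers reports
    # an empty list, mirroring the "jq length" / "sleep 1s" loop in the trace.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/host.sock
    for _ in $(seq 1 30); do                     # retry cap: assumption
        n=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq length)
        ((n == 0)) && break
        sleep 1s
    done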
00:20:04.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:04.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.461 --rc genhtml_branch_coverage=1 00:20:04.461 --rc genhtml_function_coverage=1 00:20:04.461 --rc genhtml_legend=1 00:20:04.461 --rc geninfo_all_blocks=1 00:20:04.461 --rc geninfo_unexecuted_blocks=1 00:20:04.461 00:20:04.461 ' 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:04.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.461 --rc genhtml_branch_coverage=1 00:20:04.461 --rc genhtml_function_coverage=1 00:20:04.461 --rc genhtml_legend=1 00:20:04.461 --rc geninfo_all_blocks=1 00:20:04.461 --rc geninfo_unexecuted_blocks=1 00:20:04.461 00:20:04.461 ' 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:04.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.461 --rc genhtml_branch_coverage=1 00:20:04.461 --rc genhtml_function_coverage=1 00:20:04.461 --rc genhtml_legend=1 00:20:04.461 --rc geninfo_all_blocks=1 00:20:04.461 --rc geninfo_unexecuted_blocks=1 00:20:04.461 00:20:04.461 ' 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:04.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:04.461 --rc genhtml_branch_coverage=1 00:20:04.461 --rc genhtml_function_coverage=1 00:20:04.461 --rc genhtml_legend=1 00:20:04.461 --rc geninfo_all_blocks=1 00:20:04.461 --rc geninfo_unexecuted_blocks=1 00:20:04.461 00:20:04.461 ' 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:04.461 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:20:04.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:20:04.462 08:19:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:12.604 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:12.604 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:20:12.604 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:12.604 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:12.604 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:12.604 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:12.604 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:12.604 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:20:12.604 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:12.604 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:20:12.604 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:20:12.604 
08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:20:12.604 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:20:12.604 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:20:12.604 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:20:12.604 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:12.604 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:12.605 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:12.605 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:12.605 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:12.605 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:12.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:12.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.712 ms 00:20:12.605 00:20:12.605 --- 10.0.0.2 ping statistics --- 00:20:12.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.605 rtt min/avg/max/mdev = 0.712/0.712/0.712/0.000 ms 00:20:12.605 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:12.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:12.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:20:12.605 00:20:12.605 --- 10.0.0.1 ping statistics --- 00:20:12.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.605 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:20:12.605 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:12.605 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:20:12.605 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:12.605 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:12.605 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:12.605 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:12.605 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:12.605 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:12.605 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:12.605 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:12.605 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:12.605 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:12.605 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:12.606 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1977206 00:20:12.606 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1977206 00:20:12.606 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:12.606 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1977206 ']' 00:20:12.606 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.606 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:20:12.606 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.606 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:12.606 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:12.606 [2024-11-28 08:19:09.118894] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:20:12.606 [2024-11-28 08:19:09.118962] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:12.606 [2024-11-28 08:19:09.225988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:12.606 [2024-11-28 08:19:09.286350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.606 [2024-11-28 08:19:09.286397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.606 [2024-11-28 08:19:09.286406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.606 [2024-11-28 08:19:09.286413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.606 [2024-11-28 08:19:09.286420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
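Before the --no-huge target comes up, nvmf_tcp_init splits the two e810 ports across namespaces; the sequence is easier to follow replayed in order. A condensed sketch using exactly the interface names and addresses from the trace (cvl_0_0 becomes the target port inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the initiator side):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listener port; the comment tag lets teardown strip
    # the rule later via iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root ns -> target (0.712 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator

Only after both pings succeed does the harness launch nvmf_tgt inside the namespace with -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78, which is why the four reactors land on cores 3 through 6.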
00:20:12.606 [2024-11-28 08:19:09.287926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:12.606 [2024-11-28 08:19:09.288150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:20:12.606 [2024-11-28 08:19:09.288390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:20:12.606 [2024-11-28 08:19:09.288576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:12.868 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.868 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:20:12.868 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:12.868 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:12.868 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:12.868 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.868 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:12.868 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.868 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:12.868 [2024-11-28 08:19:10.006067] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:12.868 Malloc0 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:12.868 [2024-11-28 08:19:10.062613] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:12.868 { 00:20:12.868 "params": { 00:20:12.868 "name": "Nvme$subsystem", 00:20:12.868 "trtype": "$TEST_TRANSPORT", 00:20:12.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:12.868 "adrfam": "ipv4", 00:20:12.868 "trsvcid": "$NVMF_PORT", 00:20:12.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:12.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:12.868 "hdgst": ${hdgst:-false}, 00:20:12.868 "ddgst": ${ddgst:-false} 00:20:12.868 }, 00:20:12.868 "method": "bdev_nvme_attach_controller" 00:20:12.868 } 00:20:12.868 EOF 00:20:12.868 )") 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:20:12.868 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:12.868 "params": { 00:20:12.868 "name": "Nvme1", 00:20:12.868 "trtype": "tcp", 00:20:12.868 "traddr": "10.0.0.2", 00:20:12.868 "adrfam": "ipv4", 00:20:12.868 "trsvcid": "4420", 00:20:12.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:12.868 "hdgst": false, 00:20:12.868 "ddgst": false 00:20:12.868 }, 00:20:12.868 "method": "bdev_nvme_attach_controller" 00:20:12.868 }' 00:20:12.868 [2024-11-28 08:19:10.122674] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
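The gen_nvmf_target_json expansion above builds the configuration that bdevio consumes through --json /dev/fd/62: each requested controller becomes one bdev_nvme_attach_controller object with the template variables filled in. The per-controller object below is copied from the printf in the trace; the outer subsystems/bdev wrapper is not visible at this point in the log and is reconstructed as an assumption from the shape SPDK --json configs normally take:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }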
00:20:12.868 [2024-11-28 08:19:10.122754] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1977265 ] 00:20:13.129 [2024-11-28 08:19:10.222537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:13.129 [2024-11-28 08:19:10.283246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.129 [2024-11-28 08:19:10.283413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.129 [2024-11-28 08:19:10.283415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.390 I/O targets: 00:20:13.390 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:13.390 00:20:13.390 00:20:13.390 CUnit - A unit testing framework for C - Version 2.1-3 00:20:13.390 http://cunit.sourceforge.net/ 00:20:13.390 00:20:13.390 00:20:13.390 Suite: bdevio tests on: Nvme1n1 00:20:13.390 Test: blockdev write read block ...passed 00:20:13.390 Test: blockdev write zeroes read block ...passed 00:20:13.390 Test: blockdev write zeroes read no split ...passed 00:20:13.390 Test: blockdev write zeroes read split ...passed 00:20:13.652 Test: blockdev write zeroes read split partial ...passed 00:20:13.652 Test: blockdev reset ...[2024-11-28 08:19:10.689761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:13.652 [2024-11-28 08:19:10.689862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14c9810 (9): Bad file descriptor 00:20:13.652 [2024-11-28 08:19:10.702150] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:20:13.652 passed 00:20:13.652 Test: blockdev write read 8 blocks ...passed 00:20:13.652 Test: blockdev write read size > 128k ...passed 00:20:13.652 Test: blockdev write read invalid size ...passed 00:20:13.652 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:13.652 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:13.652 Test: blockdev write read max offset ...passed 00:20:13.652 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:13.652 Test: blockdev writev readv 8 blocks ...passed 00:20:13.652 Test: blockdev writev readv 30 x 1block ...passed 00:20:13.652 Test: blockdev writev readv block ...passed 00:20:13.652 Test: blockdev writev readv size > 128k ...passed 00:20:13.652 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:13.652 Test: blockdev comparev and writev ...[2024-11-28 08:19:10.929520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:13.652 [2024-11-28 08:19:10.929568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:13.652 [2024-11-28 08:19:10.929585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:13.652 [2024-11-28 08:19:10.929594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.652 [2024-11-28 08:19:10.930135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:13.652 [2024-11-28 08:19:10.930148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:13.652 [2024-11-28 08:19:10.930167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:13.652 [2024-11-28 08:19:10.930176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:13.652 [2024-11-28 08:19:10.930709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:13.652 [2024-11-28 08:19:10.930721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:13.652 [2024-11-28 08:19:10.930735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:13.652 [2024-11-28 08:19:10.930743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:13.652 [2024-11-28 08:19:10.931327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:13.652 [2024-11-28 08:19:10.931338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:13.652 [2024-11-28 08:19:10.931352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:13.652 [2024-11-28 08:19:10.931360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:13.913 passed 00:20:13.913 Test: blockdev nvme passthru rw ...passed 00:20:13.913 Test: blockdev nvme passthru vendor specific ...[2024-11-28 08:19:11.017027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:13.913 [2024-11-28 08:19:11.017052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:13.913 [2024-11-28 08:19:11.017479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:13.913 [2024-11-28 08:19:11.017492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:13.913 [2024-11-28 08:19:11.017901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:13.913 [2024-11-28 08:19:11.017911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:13.913 [2024-11-28 08:19:11.018296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:13.913 [2024-11-28 08:19:11.018308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:13.913 passed 00:20:13.913 Test: blockdev nvme admin passthru ...passed 00:20:13.913 Test: blockdev copy ...passed 00:20:13.913 00:20:13.913 Run Summary: Type Total Ran Passed Failed Inactive 00:20:13.913 suites 1 1 n/a 0 0 00:20:13.913 tests 23 23 23 0 0 00:20:13.913 asserts 152 152 152 0 n/a 00:20:13.913 00:20:13.913 Elapsed time = 1.145 seconds 00:20:14.174 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:14.174 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.174 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:14.174 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.174 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:14.174 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:14.174 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:14.174 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:20:14.174 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:14.174 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:20:14.174 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:14.174 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:14.174 rmmod nvme_tcp 00:20:14.174 rmmod nvme_fabrics 00:20:14.174 rmmod nvme_keyring 00:20:14.174 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:14.436 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:20:14.436 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:20:14.436 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1977206 ']' 00:20:14.436 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1977206 00:20:14.436 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1977206 ']' 00:20:14.436 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1977206 00:20:14.436 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:20:14.436 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:14.436 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1977206 00:20:14.436 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:20:14.436 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:20:14.436 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1977206' 00:20:14.436 killing process with pid 1977206 00:20:14.436 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1977206 00:20:14.436 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1977206 00:20:14.697 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:14.697 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:14.697 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:14.697 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:20:14.697 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:20:14.697 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:14.697 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:20:14.697 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:14.697 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:14.697 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.697 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:14.697 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.246 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:17.246 00:20:17.246 real 0m12.773s 00:20:17.246 user 0m14.451s 00:20:17.246 sys 0m6.935s 00:20:17.246 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:17.246 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:20:17.246 ************************************ 00:20:17.246 END TEST nvmf_bdevio_no_huge 00:20:17.246 ************************************ 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:17.246 ************************************ 00:20:17.246 START TEST nvmf_tls 00:20:17.246 ************************************ 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:17.246 * Looking for test storage... 00:20:17.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:17.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.246 --rc genhtml_branch_coverage=1 00:20:17.246 --rc genhtml_function_coverage=1 00:20:17.246 --rc genhtml_legend=1 00:20:17.246 --rc geninfo_all_blocks=1 00:20:17.246 --rc geninfo_unexecuted_blocks=1 00:20:17.246 00:20:17.246 ' 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:17.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.246 --rc genhtml_branch_coverage=1 00:20:17.246 --rc genhtml_function_coverage=1 00:20:17.246 --rc genhtml_legend=1 00:20:17.246 --rc geninfo_all_blocks=1 00:20:17.246 --rc geninfo_unexecuted_blocks=1 00:20:17.246 00:20:17.246 ' 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:17.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.246 --rc genhtml_branch_coverage=1 00:20:17.246 --rc genhtml_function_coverage=1 00:20:17.246 --rc genhtml_legend=1 00:20:17.246 --rc geninfo_all_blocks=1 00:20:17.246 --rc geninfo_unexecuted_blocks=1 00:20:17.246 00:20:17.246 ' 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:17.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.246 --rc genhtml_branch_coverage=1 00:20:17.246 --rc genhtml_function_coverage=1 00:20:17.246 --rc genhtml_legend=1 00:20:17.246 --rc geninfo_all_blocks=1 00:20:17.246 --rc geninfo_unexecuted_blocks=1 00:20:17.246 00:20:17.246 ' 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
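The scripts/common.sh trace above is the harness's component-wise version gate: "lt 1.15 2" splits each version string on "." and "-" (IFS=.-), compares the fields numerically left to right, and here concludes that the installed lcov 1.15 predates 2.x, which selects the pre-2.0 "--rc lcov_branch_coverage=1" option spelling seen in the exports that follow. A minimal Python restatement of that comparison; zero-padding of missing fields is assumed, matching the shell arithmetic, and non-numeric version fields are out of scope for this sketch:

import re

def lt(a: str, b: str) -> bool:
    # Split "1.15" / "2" the way cmp_versions does (IFS=.-) and compare
    # field by field; fields absent on one side behave like 0, as unset
    # array elements do in shell arithmetic.
    pa = [int(x) for x in re.split(r"[.-]", a)]
    pb = [int(x) for x in re.split(r"[.-]", b)]
    n = max(len(pa), len(pb))
    pa += [0] * (n - len(pa))
    pb += [0] * (n - len(pb))
    return pa < pb  # lexicographic compare of equal-length int lists

print(lt("1.15", "2"))  # True -> keep the pre-2.0 lcov option names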
00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:17.246 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:17.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:20:17.247 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:25.390 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:25.390 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:25.390 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:25.390 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:25.390 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:25.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:25.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:20:25.391 00:20:25.391 --- 10.0.0.2 ping statistics --- 00:20:25.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.391 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:25.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:25.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:20:25.391 00:20:25.391 --- 10.0.0.1 ping statistics --- 00:20:25.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.391 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1981926 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1981926 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1981926 ']' 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.391 08:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.391 [2024-11-28 08:19:21.922141] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:20:25.391 [2024-11-28 08:19:21.922252] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.391 [2024-11-28 08:19:22.025408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.391 [2024-11-28 08:19:22.075009] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.391 [2024-11-28 08:19:22.075064] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:25.391 [2024-11-28 08:19:22.075073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.391 [2024-11-28 08:19:22.075080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:25.391 [2024-11-28 08:19:22.075087] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:25.391 [2024-11-28 08:19:22.075850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.653 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:25.653 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:25.653 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:25.653 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:25.653 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.653 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.653 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:20:25.653 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:25.915 true 00:20:25.915 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:25.915 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:20:25.915 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:20:25.915 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:20:25.915 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:26.177 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:26.177 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:20:26.438 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:20:26.438 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:20:26.438 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:26.699 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:26.699 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:20:26.699 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:20:26.699 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:20:26.699 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:26.699 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:26.959 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:26.959 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:26.959 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:27.219 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:27.219 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:27.219 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:27.219 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:27.219 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:27.480 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:27.480 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.gJmiaQvymR 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.KWlEBXywlp 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.gJmiaQvymR 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.KWlEBXywlp 00:20:27.741 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:28.001 08:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:28.261 08:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.gJmiaQvymR 00:20:28.261 08:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gJmiaQvymR 00:20:28.261 08:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:28.521 [2024-11-28 08:19:25.590481] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.521 08:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:28.521 08:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:28.782 [2024-11-28 08:19:25.915214] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:28.782 [2024-11-28 08:19:25.915425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.782 08:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:29.044 malloc0 00:20:29.044 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:29.044 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gJmiaQvymR 00:20:29.305 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:29.305 08:19:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.gJmiaQvymR 00:20:41.534 Initializing NVMe Controllers 00:20:41.534 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:41.534 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:41.534 Initialization complete. Launching workers. 00:20:41.534 ======================================================== 00:20:41.534 Latency(us) 00:20:41.534 Device Information : IOPS MiB/s Average min max 00:20:41.534 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18587.85 72.61 3443.34 1107.44 5368.73 00:20:41.534 ======================================================== 00:20:41.534 Total : 18587.85 72.61 3443.34 1107.44 5368.73 00:20:41.534 00:20:41.534 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gJmiaQvymR 00:20:41.534 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:41.534 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:41.534 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:41.534 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gJmiaQvymR 00:20:41.534 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:41.534 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1984686 00:20:41.534 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:41.534 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1984686 /var/tmp/bdevperf.sock 00:20:41.534 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1984686 ']' 00:20:41.534 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:41.534 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:41.534 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.534 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:41.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:41.534 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.534 08:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.534 [2024-11-28 08:19:36.760221] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:20:41.534 [2024-11-28 08:19:36.760279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1984686 ] 00:20:41.534 [2024-11-28 08:19:36.849256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.534 [2024-11-28 08:19:36.884731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.534 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.534 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:41.534 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gJmiaQvymR 00:20:41.534 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:41.534 [2024-11-28 08:19:37.885452] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:41.534 TLSTESTn1 00:20:41.534 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:41.534 Running I/O for 10 seconds... 
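bdevperf here was launched with -z and -r /var/tmp/bdevperf.sock, so it sits idle until driven over that RPC socket: the keyring_file_add_key and bdev_nvme_attach_controller calls above went through it, and bdevperf.py's perform_tests is what actually starts the timed I/O whose samples follow. Stripped of the helper's framing and timeout handling, the trigger amounts to a single JSON-RPC call over the Unix socket; a sketch, with the one blocking recv as a simplification:

import json
import socket

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/var/tmp/bdevperf.sock")
    s.sendall(json.dumps({"jsonrpc": "2.0", "id": 1,
                          "method": "perform_tests"}).encode())
    # The results object echoed in the log below is only sent back once
    # the timed run completes, so this recv blocks for the full duration.
    print(s.recv(1 << 20).decode())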
00:20:43.177 5732.00 IOPS, 22.39 MiB/s [2024-11-28T07:19:41.407Z] 5518.00 IOPS, 21.55 MiB/s [2024-11-28T07:19:42.346Z] 5282.00 IOPS, 20.63 MiB/s [2024-11-28T07:19:43.288Z] 5560.00 IOPS, 21.72 MiB/s [2024-11-28T07:19:44.228Z] 5689.60 IOPS, 22.23 MiB/s [2024-11-28T07:19:45.338Z] 5791.50 IOPS, 22.62 MiB/s [2024-11-28T07:19:46.279Z] 5766.14 IOPS, 22.52 MiB/s [2024-11-28T07:19:47.219Z] 5726.75 IOPS, 22.37 MiB/s [2024-11-28T07:19:48.159Z] 5620.67 IOPS, 21.96 MiB/s [2024-11-28T07:19:48.159Z] 5701.70 IOPS, 22.27 MiB/s 00:20:50.870 Latency(us) 00:20:50.870 [2024-11-28T07:19:48.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.870 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:50.870 Verification LBA range: start 0x0 length 0x2000 00:20:50.870 TLSTESTn1 : 10.01 5706.57 22.29 0.00 0.00 22395.74 5352.11 32331.09 00:20:50.870 [2024-11-28T07:19:48.159Z] =================================================================================================================== 00:20:50.870 [2024-11-28T07:19:48.159Z] Total : 5706.57 22.29 0.00 0.00 22395.74 5352.11 32331.09 00:20:50.870 { 00:20:50.870 "results": [ 00:20:50.870 { 00:20:50.870 "job": "TLSTESTn1", 00:20:50.870 "core_mask": "0x4", 00:20:50.870 "workload": "verify", 00:20:50.870 "status": "finished", 00:20:50.870 "verify_range": { 00:20:50.870 "start": 0, 00:20:50.870 "length": 8192 00:20:50.870 }, 00:20:50.870 "queue_depth": 128, 00:20:50.870 "io_size": 4096, 00:20:50.870 "runtime": 10.013898, 00:20:50.870 "iops": 5706.569010389361, 00:20:50.870 "mibps": 22.291285196833442, 00:20:50.870 "io_failed": 0, 00:20:50.870 "io_timeout": 0, 00:20:50.870 "avg_latency_us": 22395.743890804097, 00:20:50.870 "min_latency_us": 5352.106666666667, 00:20:50.870 "max_latency_us": 32331.093333333334 00:20:50.870 } 00:20:50.870 ], 00:20:50.870 "core_count": 1 00:20:50.870 } 00:20:50.870 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:50.870 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1984686 00:20:50.870 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1984686 ']' 00:20:50.870 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1984686 00:20:50.870 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:50.870 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.870 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1984686 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1984686' 00:20:51.130 killing process with pid 1984686 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1984686 00:20:51.130 Received shutdown signal, test time was about 10.000000 seconds 00:20:51.130 00:20:51.130 Latency(us) 00:20:51.130 [2024-11-28T07:19:48.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.130 [2024-11-28T07:19:48.419Z] 
=================================================================================================================== 00:20:51.130 [2024-11-28T07:19:48.419Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1984686 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KWlEBXywlp 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KWlEBXywlp 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KWlEBXywlp 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.KWlEBXywlp 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1987033 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1987033 /var/tmp/bdevperf.sock 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1987033 ']' 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:51.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
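The bdevperf run being set up here (pid 1987033) is the expected-failure case: /tmp/tmp.KWlEBXywlp holds the second PSK (built from ffeeddccbbaa99887766554433221100), which was never registered on the target for host1, so the TLS handshake, and with it bdev_nvme_attach_controller, must fail; the harness's NOT wrapper asserts the non-zero exit. Both key files contain NVMe TLS PSKs in the interchange format produced by the "python -" helper traced earlier: the literal prefix NVMeTLSkey-1, a two-digit hash field (01, the SHA-256 variant), then base64 of the configured key bytes with a CRC32 appended. A sketch that rebuilds the first key, assuming the configured key is used as raw ASCII bytes and the CRC is appended little-endian, as the helper's output suggests:

import base64
import zlib

key = b"00112233445566778899aabbccddeeff"   # configured PSK, raw ASCII bytes
crc = zlib.crc32(key).to_bytes(4, "little")  # byte order is an assumption
psk = "NVMeTLSkey-1:01:" + base64.b64encode(key + crc).decode() + ":"
print(psk)
# Should match the first key echoed earlier in the log:
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: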
00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.130 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.130 [2024-11-28 08:19:48.357638] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:20:51.130 [2024-11-28 08:19:48.357693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1987033 ] 00:20:51.390 [2024-11-28 08:19:48.441941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.390 [2024-11-28 08:19:48.470059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:51.961 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.961 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:51.961 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KWlEBXywlp 00:20:52.222 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:52.222 [2024-11-28 08:19:49.493962] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:52.222 [2024-11-28 08:19:49.498570] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:52.222 [2024-11-28 08:19:49.499195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1958be0 (107): Transport endpoint is not connected 00:20:52.222 [2024-11-28 08:19:49.500190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1958be0 (9): Bad file descriptor 00:20:52.222 [2024-11-28 08:19:49.501191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:52.222 [2024-11-28 08:19:49.501199] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:52.222 [2024-11-28 08:19:49.501204] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:52.222 [2024-11-28 08:19:49.501211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:20:52.222 request: 00:20:52.222 { 00:20:52.222 "name": "TLSTEST", 00:20:52.222 "trtype": "tcp", 00:20:52.222 "traddr": "10.0.0.2", 00:20:52.222 "adrfam": "ipv4", 00:20:52.222 "trsvcid": "4420", 00:20:52.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.222 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:52.222 "prchk_reftag": false, 00:20:52.222 "prchk_guard": false, 00:20:52.222 "hdgst": false, 00:20:52.222 "ddgst": false, 00:20:52.222 "psk": "key0", 00:20:52.222 "allow_unrecognized_csi": false, 00:20:52.222 "method": "bdev_nvme_attach_controller", 00:20:52.222 "req_id": 1 00:20:52.222 } 00:20:52.222 Got JSON-RPC error response 00:20:52.222 response: 00:20:52.222 { 00:20:52.222 "code": -5, 00:20:52.222 "message": "Input/output error" 00:20:52.222 } 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1987033 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1987033 ']' 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1987033 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1987033 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1987033' 00:20:52.482 killing process with pid 1987033 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1987033 00:20:52.482 Received shutdown signal, test time was about 10.000000 seconds 00:20:52.482 00:20:52.482 Latency(us) 00:20:52.482 [2024-11-28T07:19:49.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:52.482 [2024-11-28T07:19:49.771Z] =================================================================================================================== 00:20:52.482 [2024-11-28T07:19:49.771Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1987033 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.gJmiaQvymR 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.gJmiaQvymR 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.gJmiaQvymR 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gJmiaQvymR 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1987371 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1987371 /var/tmp/bdevperf.sock 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1987371 ']' 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:52.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:52.482 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.482 [2024-11-28 08:19:49.748359] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:20:52.482 [2024-11-28 08:19:49.748414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1987371 ] 00:20:52.742 [2024-11-28 08:19:49.831096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.742 [2024-11-28 08:19:49.859037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.742 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:52.742 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:52.742 08:19:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gJmiaQvymR 00:20:53.002 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:53.002 [2024-11-28 08:19:50.269425] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:53.002 [2024-11-28 08:19:50.277941] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:53.002 [2024-11-28 08:19:50.277961] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:53.002 [2024-11-28 08:19:50.277981] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:53.002 [2024-11-28 08:19:50.278595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153ebe0 (107): Transport endpoint is not connected 00:20:53.002 [2024-11-28 08:19:50.279590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153ebe0 (9): Bad file descriptor 00:20:53.002 [2024-11-28 08:19:50.280592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:53.002 [2024-11-28 08:19:50.280598] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:53.002 [2024-11-28 08:19:50.280604] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:53.002 [2024-11-28 08:19:50.280611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
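The controller attach above never reaches the NVMe layer: the TLS 1.3 server selects its external PSK by an identity string built from the host and subsystem NQNs, and this test deliberately offers hostnqn host2 to a subsystem whose PSK was registered for host1, so the lookup for "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" misses and the handshake collapses into the -5 Input/output error returned below. A minimal sketch of the identity layout as inferred from that logged string (reading the '0' and 'R01' fields as protocol version and retained-PSK hash selector is an assumption):

    def tls_psk_identity(hostnqn: str, subnqn: str, hash_id: str = "01") -> str:
        # Layout inferred from the log: "NVMe" + version '0' + 'R' (retained PSK)
        # + two-digit hash selector, then the two NQNs, space-separated.
        return f"NVMe0R{hash_id} {hostnqn} {subnqn}"

    registered = tls_psk_identity("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:cnode1")
    offered = tls_psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1")
    assert offered != registered  # no PSK table entry for the offered identity -> handshake fails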
00:20:53.002 request: 00:20:53.002 { 00:20:53.002 "name": "TLSTEST", 00:20:53.002 "trtype": "tcp", 00:20:53.002 "traddr": "10.0.0.2", 00:20:53.002 "adrfam": "ipv4", 00:20:53.002 "trsvcid": "4420", 00:20:53.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.002 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:53.002 "prchk_reftag": false, 00:20:53.002 "prchk_guard": false, 00:20:53.002 "hdgst": false, 00:20:53.002 "ddgst": false, 00:20:53.002 "psk": "key0", 00:20:53.002 "allow_unrecognized_csi": false, 00:20:53.002 "method": "bdev_nvme_attach_controller", 00:20:53.002 "req_id": 1 00:20:53.002 } 00:20:53.002 Got JSON-RPC error response 00:20:53.002 response: 00:20:53.002 { 00:20:53.002 "code": -5, 00:20:53.002 "message": "Input/output error" 00:20:53.002 } 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1987371 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1987371 ']' 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1987371 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1987371 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1987371' 00:20:53.262 killing process with pid 1987371 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1987371 00:20:53.262 Received shutdown signal, test time was about 10.000000 seconds 00:20:53.262 00:20:53.262 Latency(us) 00:20:53.262 [2024-11-28T07:19:50.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.262 [2024-11-28T07:19:50.551Z] =================================================================================================================== 00:20:53.262 [2024-11-28T07:19:50.551Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1987371 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.gJmiaQvymR 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.gJmiaQvymR 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.gJmiaQvymR 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gJmiaQvymR 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1987391 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1987391 /var/tmp/bdevperf.sock 00:20:53.262 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:53.263 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1987391 ']' 00:20:53.263 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:53.263 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.263 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:53.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:53.263 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.263 08:19:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:53.263 [2024-11-28 08:19:50.533381] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:20:53.263 [2024-11-28 08:19:50.533437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1987391 ] 00:20:53.522 [2024-11-28 08:19:50.617375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.522 [2024-11-28 08:19:50.645395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.092 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.092 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:54.092 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gJmiaQvymR 00:20:54.353 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:54.613 [2024-11-28 08:19:51.665458] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:54.613 [2024-11-28 08:19:51.671296] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:54.613 [2024-11-28 08:19:51.671314] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:54.613 [2024-11-28 08:19:51.671332] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:54.613 [2024-11-28 08:19:51.671743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1672be0 (107): Transport endpoint is not connected 00:20:54.613 [2024-11-28 08:19:51.672740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1672be0 (9): Bad file descriptor 00:20:54.613 [2024-11-28 08:19:51.673742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:54.613 [2024-11-28 08:19:51.673749] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:54.613 [2024-11-28 08:19:51.673755] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:54.613 [2024-11-28 08:19:51.673762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
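The same PSK-lookup failure, now provoked with the right host but the wrong subsystem NQN; the JSON-RPC error that follows is what rpc.py relays back. Since this is plain JSON-RPC 2.0 over the bdevperf socket named by -r, the logged request can be replayed by hand; a sketch (socket handling is simplified, the omitted boolean parameters are their defaults, and key0 is assumed to have been registered first via keyring_file_add_key on the same socket):

    import json, socket

    # Parameters copied verbatim from the request logged below.
    req = {
        "jsonrpc": "2.0", "id": 1, "method": "bdev_nvme_attach_controller",
        "params": {
            "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode2",   # deliberately not the provisioned pairing
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "psk": "key0",
        },
    }
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/var/tmp/bdevperf.sock")
        s.sendall(json.dumps(req).encode())
        # The trace shows the call failing with {"code": -5, "message": "Input/output error"}.
        print(s.recv(65536).decode())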
00:20:54.613 request: 00:20:54.613 { 00:20:54.613 "name": "TLSTEST", 00:20:54.613 "trtype": "tcp", 00:20:54.613 "traddr": "10.0.0.2", 00:20:54.613 "adrfam": "ipv4", 00:20:54.613 "trsvcid": "4420", 00:20:54.613 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:54.613 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:54.613 "prchk_reftag": false, 00:20:54.613 "prchk_guard": false, 00:20:54.613 "hdgst": false, 00:20:54.613 "ddgst": false, 00:20:54.613 "psk": "key0", 00:20:54.613 "allow_unrecognized_csi": false, 00:20:54.613 "method": "bdev_nvme_attach_controller", 00:20:54.613 "req_id": 1 00:20:54.613 } 00:20:54.613 Got JSON-RPC error response 00:20:54.613 response: 00:20:54.613 { 00:20:54.613 "code": -5, 00:20:54.613 "message": "Input/output error" 00:20:54.613 } 00:20:54.613 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1987391 00:20:54.613 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1987391 ']' 00:20:54.613 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1987391 00:20:54.613 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:54.613 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:54.613 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1987391 00:20:54.613 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:54.613 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:54.613 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1987391' 00:20:54.613 killing process with pid 1987391 00:20:54.613 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1987391 00:20:54.613 Received shutdown signal, test time was about 10.000000 seconds 00:20:54.613 00:20:54.613 Latency(us) 00:20:54.613 [2024-11-28T07:19:51.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.613 [2024-11-28T07:19:51.902Z] =================================================================================================================== 00:20:54.613 [2024-11-28T07:19:51.902Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1987391 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:54.614 
08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1987731 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1987731 /var/tmp/bdevperf.sock 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1987731 ']' 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:54.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.614 08:19:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.874 [2024-11-28 08:19:51.918591] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:20:54.874 [2024-11-28 08:19:51.918647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1987731 ] 00:20:54.874 [2024-11-28 08:19:52.004163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.874 [2024-11-28 08:19:52.032447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.444 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.444 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:55.444 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:55.705 [2024-11-28 08:19:52.876006] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:55.705 [2024-11-28 08:19:52.876030] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:55.705 request: 00:20:55.705 { 00:20:55.705 "name": "key0", 00:20:55.705 "path": "", 00:20:55.705 "method": "keyring_file_add_key", 00:20:55.705 "req_id": 1 00:20:55.705 } 00:20:55.705 Got JSON-RPC error response 00:20:55.705 response: 00:20:55.705 { 00:20:55.705 "code": -1, 00:20:55.705 "message": "Operation not permitted" 00:20:55.705 } 00:20:55.705 08:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:55.966 [2024-11-28 08:19:53.060541] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:55.966 [2024-11-28 08:19:53.060562] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:55.966 request: 00:20:55.966 { 00:20:55.966 "name": "TLSTEST", 00:20:55.966 "trtype": "tcp", 00:20:55.966 "traddr": "10.0.0.2", 00:20:55.966 "adrfam": "ipv4", 00:20:55.966 "trsvcid": "4420", 00:20:55.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.966 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:55.966 "prchk_reftag": false, 00:20:55.966 "prchk_guard": false, 00:20:55.966 "hdgst": false, 00:20:55.966 "ddgst": false, 00:20:55.966 "psk": "key0", 00:20:55.966 "allow_unrecognized_csi": false, 00:20:55.966 "method": "bdev_nvme_attach_controller", 00:20:55.966 "req_id": 1 00:20:55.966 } 00:20:55.966 Got JSON-RPC error response 00:20:55.966 response: 00:20:55.966 { 00:20:55.966 "code": -126, 00:20:55.966 "message": "Required key not available" 00:20:55.966 } 00:20:55.966 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1987731 00:20:55.966 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1987731 ']' 00:20:55.966 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1987731 00:20:55.966 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:55.966 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.966 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1987731 00:20:55.966 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:55.967 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:55.967 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1987731' 00:20:55.967 killing process with pid 1987731 00:20:55.967 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1987731 00:20:55.967 Received shutdown signal, test time was about 10.000000 seconds 00:20:55.967 00:20:55.967 Latency(us) 00:20:55.967 [2024-11-28T07:19:53.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.967 [2024-11-28T07:19:53.256Z] =================================================================================================================== 00:20:55.967 [2024-11-28T07:19:53.256Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:55.967 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1987731 00:20:55.967 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:55.967 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:55.967 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:55.967 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:55.967 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:55.967 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1981926 00:20:55.967 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1981926 ']' 00:20:55.967 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1981926 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1981926 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1981926' 00:20:56.227 killing process with pid 1981926 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1981926 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1981926 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:56.227 08:19:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.C9ubOQ6oap 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.C9ubOQ6oap 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1988090 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1988090 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1988090 ']' 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.227 08:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.487 [2024-11-28 08:19:53.542241] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:20:56.487 [2024-11-28 08:19:53.542302] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.487 [2024-11-28 08:19:53.629977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.487 [2024-11-28 08:19:53.662455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.487 [2024-11-28 08:19:53.662488] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
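Interleaved with the teardown above, target/tls.sh@160 derives the long-form key used by the next group of tests: format_interchange_psk wraps the raw key in the NVMe TLS PSK interchange format, NVMeTLSkey-1:<digest>:<base64(key || CRC-32)>:. A sketch that reproduces the exact key_long value printed above (the little-endian CRC placement and digest labelling mirror the inline python in nvmf/common.sh and the interchange format, but are assumptions here):

    import base64, zlib

    def format_interchange_psk(key: str, digest: int) -> str:
        raw = key.encode("ascii")                    # the configured key, as ASCII text
        crc = zlib.crc32(raw).to_bytes(4, "little")  # CRC-32 appended least-significant-byte first
        return f"NVMeTLSkey-1:{digest:02d}:{base64.b64encode(raw + crc).decode()}:"

    psk = format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2)
    # digest 2 selects the SHA-384 retained hash; the result matches key_long above.
    assert psk == ("NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJi"
                   "Y2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:")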
00:20:56.487 [2024-11-28 08:19:53.662494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.487 [2024-11-28 08:19:53.662498] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.487 [2024-11-28 08:19:53.662502] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.487 [2024-11-28 08:19:53.663006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.427 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.427 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:57.427 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:57.427 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:57.427 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.427 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.427 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.C9ubOQ6oap 00:20:57.428 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.C9ubOQ6oap 00:20:57.428 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:57.428 [2024-11-28 08:19:54.541995] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.428 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:57.688 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:57.688 [2024-11-28 08:19:54.862782] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:57.688 [2024-11-28 08:19:54.862990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.688 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:57.954 malloc0 00:20:57.954 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:57.954 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.C9ubOQ6oap 00:20:58.222 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:58.482 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.C9ubOQ6oap 00:20:58.482 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:20:58.482 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:58.482 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:58.482 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.C9ubOQ6oap 00:20:58.482 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:58.482 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:58.482 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1988456 00:20:58.482 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:58.482 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1988456 /var/tmp/bdevperf.sock 00:20:58.482 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1988456 ']' 00:20:58.482 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:58.482 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:58.482 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:58.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:58.482 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:58.482 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.482 [2024-11-28 08:19:55.543045] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:20:58.482 [2024-11-28 08:19:55.543097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1988456 ] 00:20:58.482 [2024-11-28 08:19:55.625373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.482 [2024-11-28 08:19:55.654467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.482 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:58.482 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:58.482 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.C9ubOQ6oap 00:20:58.744 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:59.005 [2024-11-28 08:19:56.036656] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:59.005 TLSTESTn1 00:20:59.005 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:59.006 Running I/O for 10 seconds... 00:21:01.330 5887.00 IOPS, 23.00 MiB/s [2024-11-28T07:19:59.558Z] 5598.00 IOPS, 21.87 MiB/s [2024-11-28T07:20:00.497Z] 5556.00 IOPS, 21.70 MiB/s [2024-11-28T07:20:01.438Z] 5702.25 IOPS, 22.27 MiB/s [2024-11-28T07:20:02.377Z] 5731.80 IOPS, 22.39 MiB/s [2024-11-28T07:20:03.319Z] 5481.67 IOPS, 21.41 MiB/s [2024-11-28T07:20:04.259Z] 5503.86 IOPS, 21.50 MiB/s [2024-11-28T07:20:05.644Z] 5550.50 IOPS, 21.68 MiB/s [2024-11-28T07:20:06.586Z] 5456.44 IOPS, 21.31 MiB/s [2024-11-28T07:20:06.586Z] 5405.90 IOPS, 21.12 MiB/s 00:21:09.297 Latency(us) 00:21:09.297 [2024-11-28T07:20:06.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.297 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:09.297 Verification LBA range: start 0x0 length 0x2000 00:21:09.297 TLSTESTn1 : 10.02 5407.44 21.12 0.00 0.00 23629.48 4587.52 24576.00 00:21:09.297 [2024-11-28T07:20:06.586Z] =================================================================================================================== 00:21:09.297 [2024-11-28T07:20:06.586Z] Total : 5407.44 21.12 0.00 0.00 23629.48 4587.52 24576.00 00:21:09.297 { 00:21:09.297 "results": [ 00:21:09.297 { 00:21:09.297 "job": "TLSTESTn1", 00:21:09.297 "core_mask": "0x4", 00:21:09.297 "workload": "verify", 00:21:09.297 "status": "finished", 00:21:09.297 "verify_range": { 00:21:09.297 "start": 0, 00:21:09.297 "length": 8192 00:21:09.297 }, 00:21:09.297 "queue_depth": 128, 00:21:09.297 "io_size": 4096, 00:21:09.297 "runtime": 10.020821, 00:21:09.297 "iops": 5407.441166746717, 00:21:09.297 "mibps": 21.12281705760436, 00:21:09.297 "io_failed": 0, 00:21:09.297 "io_timeout": 0, 00:21:09.297 "avg_latency_us": 23629.480090304565, 00:21:09.297 "min_latency_us": 4587.52, 00:21:09.297 "max_latency_us": 24576.0 00:21:09.297 } 00:21:09.297 ], 00:21:09.297 "core_count": 1 
00:21:09.297 } 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1988456 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1988456 ']' 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1988456 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1988456 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1988456' 00:21:09.297 killing process with pid 1988456 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1988456 00:21:09.297 Received shutdown signal, test time was about 10.000000 seconds 00:21:09.297 00:21:09.297 Latency(us) 00:21:09.297 [2024-11-28T07:20:06.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.297 [2024-11-28T07:20:06.586Z] =================================================================================================================== 00:21:09.297 [2024-11-28T07:20:06.586Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1988456 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.C9ubOQ6oap 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.C9ubOQ6oap 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.C9ubOQ6oap 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.C9ubOQ6oap 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:09.297 08:20:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.C9ubOQ6oap 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1990501 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1990501 /var/tmp/bdevperf.sock 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1990501 ']' 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:09.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:09.297 08:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.297 [2024-11-28 08:20:06.499631] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:21:09.297 [2024-11-28 08:20:06.499689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1990501 ] 00:21:09.297 [2024-11-28 08:20:06.582067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.558 [2024-11-28 08:20:06.610948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:10.128 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.128 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:10.128 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.C9ubOQ6oap 00:21:10.389 [2024-11-28 08:20:07.422174] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.C9ubOQ6oap': 0100666 00:21:10.389 [2024-11-28 08:20:07.422193] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:10.389 request: 00:21:10.389 { 00:21:10.389 "name": "key0", 00:21:10.389 "path": "/tmp/tmp.C9ubOQ6oap", 00:21:10.389 "method": "keyring_file_add_key", 00:21:10.389 "req_id": 1 00:21:10.389 } 00:21:10.389 Got JSON-RPC error response 00:21:10.389 response: 00:21:10.389 { 00:21:10.389 "code": -1, 00:21:10.389 "message": "Operation not permitted" 00:21:10.389 } 00:21:10.389 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:10.389 [2024-11-28 08:20:07.590661] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:10.389 [2024-11-28 08:20:07.590682] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:10.389 request: 00:21:10.389 { 00:21:10.389 "name": "TLSTEST", 00:21:10.389 "trtype": "tcp", 00:21:10.389 "traddr": "10.0.0.2", 00:21:10.389 "adrfam": "ipv4", 00:21:10.389 "trsvcid": "4420", 00:21:10.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:10.389 "prchk_reftag": false, 00:21:10.389 "prchk_guard": false, 00:21:10.389 "hdgst": false, 00:21:10.389 "ddgst": false, 00:21:10.389 "psk": "key0", 00:21:10.389 "allow_unrecognized_csi": false, 00:21:10.389 "method": "bdev_nvme_attach_controller", 00:21:10.389 "req_id": 1 00:21:10.389 } 00:21:10.389 Got JSON-RPC error response 00:21:10.389 response: 00:21:10.389 { 00:21:10.389 "code": -126, 00:21:10.389 "message": "Required key not available" 00:21:10.389 } 00:21:10.389 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1990501 00:21:10.389 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1990501 ']' 00:21:10.389 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1990501 00:21:10.389 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:10.389 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:10.389 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1990501 00:21:10.650 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:10.650 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:10.650 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1990501' 00:21:10.650 killing process with pid 1990501 00:21:10.650 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1990501 00:21:10.650 Received shutdown signal, test time was about 10.000000 seconds 00:21:10.650 00:21:10.650 Latency(us) 00:21:10.650 [2024-11-28T07:20:07.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.650 [2024-11-28T07:20:07.939Z] =================================================================================================================== 00:21:10.650 [2024-11-28T07:20:07.939Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:10.650 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1990501 00:21:10.650 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:10.650 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:10.650 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:10.650 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:10.650 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:10.650 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1988090 00:21:10.650 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1988090 ']' 00:21:10.650 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1988090 00:21:10.650 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:10.650 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:10.650 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1988090 00:21:10.650 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:10.650 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:10.650 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1988090' 00:21:10.650 killing process with pid 1988090 00:21:10.650 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1988090 00:21:10.650 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1988090 00:21:10.912 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:21:10.912 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:10.912 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:10.912 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.912 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=1990816 00:21:10.912 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1990816 00:21:10.912 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:10.912 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1990816 ']' 00:21:10.912 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.912 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:10.912 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.912 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:10.912 08:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.912 [2024-11-28 08:20:08.017635] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:21:10.912 [2024-11-28 08:20:08.017688] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.912 [2024-11-28 08:20:08.106086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.912 [2024-11-28 08:20:08.134583] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.912 [2024-11-28 08:20:08.134615] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.912 [2024-11-28 08:20:08.134620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.912 [2024-11-28 08:20:08.134625] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.912 [2024-11-28 08:20:08.134629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
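The restarted target (pid 1990816) is then provisioned exactly as before: TCP transport, subsystem, TLS-enabled listener, a malloc namespace, then key and host registration. The rpc.py sequence scattered through the trace below condenses to the following sketch (arguments copied verbatim from the trace; assumes a running nvmf_tgt answering on rpc.py's default socket). Note that in this particular pass the key registration is expected to fail, because the test left the key file at mode 0666:

    import subprocess

    RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    def rpc(*args: str) -> None:
        subprocess.run([RPC, *args], check=True)

    rpc("nvmf_create_transport", "-t", "tcp", "-o")
    rpc("nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1",
        "-s", "SPDK00000000000001", "-m", "10")
    rpc("nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
        "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-k")      # -k enables TLS on the listener
    rpc("bdev_malloc_create", "32", "4096", "-b", "malloc0")
    rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0", "-n", "1")
    rpc("keyring_file_add_key", "key0", "/tmp/tmp.C9ubOQ6oap")  # fails below: file is still 0666
    rpc("nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1",
        "nqn.2016-06.io.spdk:host1", "--psk", "key0")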
00:21:10.912 [2024-11-28 08:20:08.135087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.853 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.853 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:11.853 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:11.853 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:11.853 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.853 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.853 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.C9ubOQ6oap 00:21:11.853 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:11.853 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.C9ubOQ6oap 00:21:11.853 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:21:11.853 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:11.853 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:21:11.853 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:11.853 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.C9ubOQ6oap 00:21:11.853 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.C9ubOQ6oap 00:21:11.853 08:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:11.853 [2024-11-28 08:20:09.015419] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.853 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:12.114 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:12.114 [2024-11-28 08:20:09.376300] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:12.114 [2024-11-28 08:20:09.376518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.373 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:12.374 malloc0 00:21:12.374 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:12.634 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.C9ubOQ6oap 00:21:12.894 [2024-11-28 
08:20:09.923135] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.C9ubOQ6oap': 0100666 00:21:12.894 [2024-11-28 08:20:09.923156] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:12.894 request: 00:21:12.894 { 00:21:12.894 "name": "key0", 00:21:12.894 "path": "/tmp/tmp.C9ubOQ6oap", 00:21:12.894 "method": "keyring_file_add_key", 00:21:12.894 "req_id": 1 00:21:12.894 } 00:21:12.894 Got JSON-RPC error response 00:21:12.894 response: 00:21:12.894 { 00:21:12.894 "code": -1, 00:21:12.894 "message": "Operation not permitted" 00:21:12.894 } 00:21:12.894 08:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:12.894 [2024-11-28 08:20:10.099610] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:21:12.894 [2024-11-28 08:20:10.099652] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:12.894 request: 00:21:12.894 { 00:21:12.894 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.894 "host": "nqn.2016-06.io.spdk:host1", 00:21:12.894 "psk": "key0", 00:21:12.894 "method": "nvmf_subsystem_add_host", 00:21:12.894 "req_id": 1 00:21:12.894 } 00:21:12.894 Got JSON-RPC error response 00:21:12.894 response: 00:21:12.894 { 00:21:12.894 "code": -32603, 00:21:12.894 "message": "Internal error" 00:21:12.894 } 00:21:12.894 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:12.894 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:12.894 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:12.894 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:12.894 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1990816 00:21:12.894 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1990816 ']' 00:21:12.894 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1990816 00:21:12.894 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:12.894 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.894 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1990816 00:21:13.154 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:13.154 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:13.154 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1990816' 00:21:13.154 killing process with pid 1990816 00:21:13.154 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1990816 00:21:13.154 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1990816 00:21:13.154 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.C9ubOQ6oap 00:21:13.154 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:21:13.154 08:20:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:13.154 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:13.154 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.154 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1991412 00:21:13.154 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1991412 00:21:13.154 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:13.154 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1991412 ']' 00:21:13.154 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.154 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:13.154 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.154 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:13.154 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.154 [2024-11-28 08:20:10.376331] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:21:13.154 [2024-11-28 08:20:10.376391] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.415 [2024-11-28 08:20:10.465912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.415 [2024-11-28 08:20:10.495792] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.415 [2024-11-28 08:20:10.495821] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.415 [2024-11-28 08:20:10.495826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.415 [2024-11-28 08:20:10.495831] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.415 [2024-11-28 08:20:10.495835] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
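[Annotation — not captured output.] A minimal sketch of the rpc.py sequence that setup_nvmf_tgt drives in the passes above, using the same paths and NQNs as the log. The first pass failed because keyring_file_add_key refuses a world-readable PSK file (mode 0100666, "Operation not permitted"); after chmod 0600 the identical sequence succeeds.

    # Sketch of the target-side TLS setup, condensed from the traces above.
    # The PSK file must be 0600 before keyring_file_add_key will accept it.
    chmod 0600 /tmp/tmp.C9ubOQ6oap
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.C9ubOQ6oap
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0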
00:21:13.415 [2024-11-28 08:20:10.496303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.985 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:13.985 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:13.985 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:13.985 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:13.985 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.985 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.985 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.C9ubOQ6oap 00:21:13.985 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.C9ubOQ6oap 00:21:13.985 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:14.245 [2024-11-28 08:20:11.352675] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.245 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:14.506 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:14.506 [2024-11-28 08:20:11.717578] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:14.506 [2024-11-28 08:20:11.717765] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.506 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:14.766 malloc0 00:21:14.766 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:15.027 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.C9ubOQ6oap 00:21:15.027 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:15.290 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:15.290 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1991875 00:21:15.290 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:15.290 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1991875 /var/tmp/bdevperf.sock 00:21:15.290 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1991875 ']' 00:21:15.290 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:15.290 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:15.290 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:15.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:15.290 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:15.290 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.290 [2024-11-28 08:20:12.511620] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:21:15.290 [2024-11-28 08:20:12.511673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1991875 ] 00:21:15.551 [2024-11-28 08:20:12.597033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.551 [2024-11-28 08:20:12.625924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:15.551 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:15.551 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:15.551 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.C9ubOQ6oap 00:21:15.812 08:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:15.812 [2024-11-28 08:20:13.044448] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:16.072 TLSTESTn1 00:21:16.072 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:16.335 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:21:16.335 "subsystems": [ 00:21:16.335 { 00:21:16.335 "subsystem": "keyring", 00:21:16.335 "config": [ 00:21:16.335 { 00:21:16.335 "method": "keyring_file_add_key", 00:21:16.335 "params": { 00:21:16.335 "name": "key0", 00:21:16.335 "path": "/tmp/tmp.C9ubOQ6oap" 00:21:16.335 } 00:21:16.335 } 00:21:16.335 ] 00:21:16.335 }, 00:21:16.335 { 00:21:16.335 "subsystem": "iobuf", 00:21:16.335 "config": [ 00:21:16.335 { 00:21:16.335 "method": "iobuf_set_options", 00:21:16.335 "params": { 00:21:16.335 "small_pool_count": 8192, 00:21:16.335 "large_pool_count": 1024, 00:21:16.335 "small_bufsize": 8192, 00:21:16.335 "large_bufsize": 135168, 00:21:16.335 "enable_numa": false 00:21:16.335 } 00:21:16.335 } 00:21:16.335 ] 00:21:16.335 }, 00:21:16.335 { 00:21:16.335 "subsystem": "sock", 00:21:16.335 "config": [ 00:21:16.335 { 00:21:16.335 "method": "sock_set_default_impl", 00:21:16.335 "params": { 00:21:16.335 "impl_name": "posix" 
00:21:16.335 } 00:21:16.335 }, 00:21:16.335 { 00:21:16.335 "method": "sock_impl_set_options", 00:21:16.335 "params": { 00:21:16.335 "impl_name": "ssl", 00:21:16.336 "recv_buf_size": 4096, 00:21:16.336 "send_buf_size": 4096, 00:21:16.336 "enable_recv_pipe": true, 00:21:16.336 "enable_quickack": false, 00:21:16.336 "enable_placement_id": 0, 00:21:16.336 "enable_zerocopy_send_server": true, 00:21:16.336 "enable_zerocopy_send_client": false, 00:21:16.336 "zerocopy_threshold": 0, 00:21:16.336 "tls_version": 0, 00:21:16.336 "enable_ktls": false 00:21:16.336 } 00:21:16.336 }, 00:21:16.336 { 00:21:16.336 "method": "sock_impl_set_options", 00:21:16.336 "params": { 00:21:16.336 "impl_name": "posix", 00:21:16.336 "recv_buf_size": 2097152, 00:21:16.336 "send_buf_size": 2097152, 00:21:16.336 "enable_recv_pipe": true, 00:21:16.336 "enable_quickack": false, 00:21:16.336 "enable_placement_id": 0, 00:21:16.336 "enable_zerocopy_send_server": true, 00:21:16.336 "enable_zerocopy_send_client": false, 00:21:16.336 "zerocopy_threshold": 0, 00:21:16.336 "tls_version": 0, 00:21:16.336 "enable_ktls": false 00:21:16.336 } 00:21:16.336 } 00:21:16.336 ] 00:21:16.336 }, 00:21:16.336 { 00:21:16.336 "subsystem": "vmd", 00:21:16.336 "config": [] 00:21:16.336 }, 00:21:16.336 { 00:21:16.336 "subsystem": "accel", 00:21:16.336 "config": [ 00:21:16.336 { 00:21:16.336 "method": "accel_set_options", 00:21:16.336 "params": { 00:21:16.336 "small_cache_size": 128, 00:21:16.336 "large_cache_size": 16, 00:21:16.336 "task_count": 2048, 00:21:16.336 "sequence_count": 2048, 00:21:16.336 "buf_count": 2048 00:21:16.336 } 00:21:16.336 } 00:21:16.336 ] 00:21:16.336 }, 00:21:16.336 { 00:21:16.336 "subsystem": "bdev", 00:21:16.336 "config": [ 00:21:16.336 { 00:21:16.336 "method": "bdev_set_options", 00:21:16.336 "params": { 00:21:16.336 "bdev_io_pool_size": 65535, 00:21:16.336 "bdev_io_cache_size": 256, 00:21:16.336 "bdev_auto_examine": true, 00:21:16.336 "iobuf_small_cache_size": 128, 00:21:16.336 "iobuf_large_cache_size": 16 00:21:16.336 } 00:21:16.336 }, 00:21:16.336 { 00:21:16.336 "method": "bdev_raid_set_options", 00:21:16.336 "params": { 00:21:16.336 "process_window_size_kb": 1024, 00:21:16.336 "process_max_bandwidth_mb_sec": 0 00:21:16.336 } 00:21:16.336 }, 00:21:16.336 { 00:21:16.336 "method": "bdev_iscsi_set_options", 00:21:16.336 "params": { 00:21:16.336 "timeout_sec": 30 00:21:16.336 } 00:21:16.336 }, 00:21:16.336 { 00:21:16.336 "method": "bdev_nvme_set_options", 00:21:16.336 "params": { 00:21:16.336 "action_on_timeout": "none", 00:21:16.336 "timeout_us": 0, 00:21:16.336 "timeout_admin_us": 0, 00:21:16.336 "keep_alive_timeout_ms": 10000, 00:21:16.336 "arbitration_burst": 0, 00:21:16.336 "low_priority_weight": 0, 00:21:16.336 "medium_priority_weight": 0, 00:21:16.336 "high_priority_weight": 0, 00:21:16.336 "nvme_adminq_poll_period_us": 10000, 00:21:16.336 "nvme_ioq_poll_period_us": 0, 00:21:16.336 "io_queue_requests": 0, 00:21:16.336 "delay_cmd_submit": true, 00:21:16.336 "transport_retry_count": 4, 00:21:16.336 "bdev_retry_count": 3, 00:21:16.336 "transport_ack_timeout": 0, 00:21:16.336 "ctrlr_loss_timeout_sec": 0, 00:21:16.336 "reconnect_delay_sec": 0, 00:21:16.336 "fast_io_fail_timeout_sec": 0, 00:21:16.336 "disable_auto_failback": false, 00:21:16.336 "generate_uuids": false, 00:21:16.336 "transport_tos": 0, 00:21:16.336 "nvme_error_stat": false, 00:21:16.336 "rdma_srq_size": 0, 00:21:16.336 "io_path_stat": false, 00:21:16.336 "allow_accel_sequence": false, 00:21:16.336 "rdma_max_cq_size": 0, 00:21:16.336 
"rdma_cm_event_timeout_ms": 0, 00:21:16.336 "dhchap_digests": [ 00:21:16.336 "sha256", 00:21:16.336 "sha384", 00:21:16.336 "sha512" 00:21:16.336 ], 00:21:16.336 "dhchap_dhgroups": [ 00:21:16.336 "null", 00:21:16.336 "ffdhe2048", 00:21:16.336 "ffdhe3072", 00:21:16.336 "ffdhe4096", 00:21:16.336 "ffdhe6144", 00:21:16.336 "ffdhe8192" 00:21:16.336 ] 00:21:16.336 } 00:21:16.336 }, 00:21:16.336 { 00:21:16.336 "method": "bdev_nvme_set_hotplug", 00:21:16.336 "params": { 00:21:16.336 "period_us": 100000, 00:21:16.336 "enable": false 00:21:16.336 } 00:21:16.336 }, 00:21:16.336 { 00:21:16.336 "method": "bdev_malloc_create", 00:21:16.336 "params": { 00:21:16.336 "name": "malloc0", 00:21:16.336 "num_blocks": 8192, 00:21:16.336 "block_size": 4096, 00:21:16.336 "physical_block_size": 4096, 00:21:16.336 "uuid": "f2e6ad94-c92c-4928-b925-689d9fd66ad9", 00:21:16.336 "optimal_io_boundary": 0, 00:21:16.336 "md_size": 0, 00:21:16.336 "dif_type": 0, 00:21:16.336 "dif_is_head_of_md": false, 00:21:16.336 "dif_pi_format": 0 00:21:16.336 } 00:21:16.336 }, 00:21:16.336 { 00:21:16.336 "method": "bdev_wait_for_examine" 00:21:16.336 } 00:21:16.336 ] 00:21:16.336 }, 00:21:16.336 { 00:21:16.336 "subsystem": "nbd", 00:21:16.336 "config": [] 00:21:16.336 }, 00:21:16.336 { 00:21:16.336 "subsystem": "scheduler", 00:21:16.336 "config": [ 00:21:16.336 { 00:21:16.336 "method": "framework_set_scheduler", 00:21:16.336 "params": { 00:21:16.336 "name": "static" 00:21:16.336 } 00:21:16.336 } 00:21:16.336 ] 00:21:16.336 }, 00:21:16.336 { 00:21:16.336 "subsystem": "nvmf", 00:21:16.336 "config": [ 00:21:16.336 { 00:21:16.336 "method": "nvmf_set_config", 00:21:16.336 "params": { 00:21:16.336 "discovery_filter": "match_any", 00:21:16.336 "admin_cmd_passthru": { 00:21:16.336 "identify_ctrlr": false 00:21:16.336 }, 00:21:16.336 "dhchap_digests": [ 00:21:16.336 "sha256", 00:21:16.336 "sha384", 00:21:16.336 "sha512" 00:21:16.336 ], 00:21:16.336 "dhchap_dhgroups": [ 00:21:16.336 "null", 00:21:16.336 "ffdhe2048", 00:21:16.336 "ffdhe3072", 00:21:16.336 "ffdhe4096", 00:21:16.336 "ffdhe6144", 00:21:16.336 "ffdhe8192" 00:21:16.336 ] 00:21:16.336 } 00:21:16.336 }, 00:21:16.336 { 00:21:16.336 "method": "nvmf_set_max_subsystems", 00:21:16.336 "params": { 00:21:16.336 "max_subsystems": 1024 00:21:16.336 } 00:21:16.336 }, 00:21:16.336 { 00:21:16.336 "method": "nvmf_set_crdt", 00:21:16.336 "params": { 00:21:16.336 "crdt1": 0, 00:21:16.336 "crdt2": 0, 00:21:16.336 "crdt3": 0 00:21:16.336 } 00:21:16.336 }, 00:21:16.336 { 00:21:16.336 "method": "nvmf_create_transport", 00:21:16.336 "params": { 00:21:16.336 "trtype": "TCP", 00:21:16.336 "max_queue_depth": 128, 00:21:16.336 "max_io_qpairs_per_ctrlr": 127, 00:21:16.336 "in_capsule_data_size": 4096, 00:21:16.336 "max_io_size": 131072, 00:21:16.336 "io_unit_size": 131072, 00:21:16.336 "max_aq_depth": 128, 00:21:16.336 "num_shared_buffers": 511, 00:21:16.336 "buf_cache_size": 4294967295, 00:21:16.336 "dif_insert_or_strip": false, 00:21:16.336 "zcopy": false, 00:21:16.336 "c2h_success": false, 00:21:16.336 "sock_priority": 0, 00:21:16.336 "abort_timeout_sec": 1, 00:21:16.336 "ack_timeout": 0, 00:21:16.336 "data_wr_pool_size": 0 00:21:16.336 } 00:21:16.336 }, 00:21:16.336 { 00:21:16.336 "method": "nvmf_create_subsystem", 00:21:16.336 "params": { 00:21:16.336 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.336 "allow_any_host": false, 00:21:16.336 "serial_number": "SPDK00000000000001", 00:21:16.336 "model_number": "SPDK bdev Controller", 00:21:16.336 "max_namespaces": 10, 00:21:16.336 "min_cntlid": 1, 00:21:16.336 
"max_cntlid": 65519, 00:21:16.336 "ana_reporting": false 00:21:16.336 } 00:21:16.336 }, 00:21:16.336 { 00:21:16.336 "method": "nvmf_subsystem_add_host", 00:21:16.337 "params": { 00:21:16.337 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.337 "host": "nqn.2016-06.io.spdk:host1", 00:21:16.337 "psk": "key0" 00:21:16.337 } 00:21:16.337 }, 00:21:16.337 { 00:21:16.337 "method": "nvmf_subsystem_add_ns", 00:21:16.337 "params": { 00:21:16.337 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.337 "namespace": { 00:21:16.337 "nsid": 1, 00:21:16.337 "bdev_name": "malloc0", 00:21:16.337 "nguid": "F2E6AD94C92C4928B925689D9FD66AD9", 00:21:16.337 "uuid": "f2e6ad94-c92c-4928-b925-689d9fd66ad9", 00:21:16.337 "no_auto_visible": false 00:21:16.337 } 00:21:16.337 } 00:21:16.337 }, 00:21:16.337 { 00:21:16.337 "method": "nvmf_subsystem_add_listener", 00:21:16.337 "params": { 00:21:16.337 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.337 "listen_address": { 00:21:16.337 "trtype": "TCP", 00:21:16.337 "adrfam": "IPv4", 00:21:16.337 "traddr": "10.0.0.2", 00:21:16.337 "trsvcid": "4420" 00:21:16.337 }, 00:21:16.337 "secure_channel": true 00:21:16.337 } 00:21:16.337 } 00:21:16.337 ] 00:21:16.337 } 00:21:16.337 ] 00:21:16.337 }' 00:21:16.337 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:16.599 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:21:16.599 "subsystems": [ 00:21:16.599 { 00:21:16.599 "subsystem": "keyring", 00:21:16.599 "config": [ 00:21:16.599 { 00:21:16.599 "method": "keyring_file_add_key", 00:21:16.599 "params": { 00:21:16.599 "name": "key0", 00:21:16.599 "path": "/tmp/tmp.C9ubOQ6oap" 00:21:16.599 } 00:21:16.599 } 00:21:16.599 ] 00:21:16.599 }, 00:21:16.599 { 00:21:16.599 "subsystem": "iobuf", 00:21:16.599 "config": [ 00:21:16.599 { 00:21:16.599 "method": "iobuf_set_options", 00:21:16.599 "params": { 00:21:16.599 "small_pool_count": 8192, 00:21:16.599 "large_pool_count": 1024, 00:21:16.599 "small_bufsize": 8192, 00:21:16.599 "large_bufsize": 135168, 00:21:16.599 "enable_numa": false 00:21:16.599 } 00:21:16.599 } 00:21:16.599 ] 00:21:16.599 }, 00:21:16.599 { 00:21:16.599 "subsystem": "sock", 00:21:16.599 "config": [ 00:21:16.599 { 00:21:16.599 "method": "sock_set_default_impl", 00:21:16.599 "params": { 00:21:16.599 "impl_name": "posix" 00:21:16.599 } 00:21:16.599 }, 00:21:16.599 { 00:21:16.599 "method": "sock_impl_set_options", 00:21:16.599 "params": { 00:21:16.599 "impl_name": "ssl", 00:21:16.599 "recv_buf_size": 4096, 00:21:16.599 "send_buf_size": 4096, 00:21:16.599 "enable_recv_pipe": true, 00:21:16.599 "enable_quickack": false, 00:21:16.599 "enable_placement_id": 0, 00:21:16.599 "enable_zerocopy_send_server": true, 00:21:16.599 "enable_zerocopy_send_client": false, 00:21:16.599 "zerocopy_threshold": 0, 00:21:16.599 "tls_version": 0, 00:21:16.599 "enable_ktls": false 00:21:16.599 } 00:21:16.599 }, 00:21:16.599 { 00:21:16.599 "method": "sock_impl_set_options", 00:21:16.599 "params": { 00:21:16.599 "impl_name": "posix", 00:21:16.599 "recv_buf_size": 2097152, 00:21:16.599 "send_buf_size": 2097152, 00:21:16.599 "enable_recv_pipe": true, 00:21:16.599 "enable_quickack": false, 00:21:16.599 "enable_placement_id": 0, 00:21:16.599 "enable_zerocopy_send_server": true, 00:21:16.599 "enable_zerocopy_send_client": false, 00:21:16.599 "zerocopy_threshold": 0, 00:21:16.599 "tls_version": 0, 00:21:16.599 "enable_ktls": false 00:21:16.599 } 00:21:16.599 
} 00:21:16.599 ] 00:21:16.599 }, 00:21:16.599 { 00:21:16.599 "subsystem": "vmd", 00:21:16.599 "config": [] 00:21:16.599 }, 00:21:16.599 { 00:21:16.599 "subsystem": "accel", 00:21:16.599 "config": [ 00:21:16.599 { 00:21:16.599 "method": "accel_set_options", 00:21:16.599 "params": { 00:21:16.599 "small_cache_size": 128, 00:21:16.599 "large_cache_size": 16, 00:21:16.599 "task_count": 2048, 00:21:16.599 "sequence_count": 2048, 00:21:16.599 "buf_count": 2048 00:21:16.599 } 00:21:16.599 } 00:21:16.599 ] 00:21:16.599 }, 00:21:16.599 { 00:21:16.599 "subsystem": "bdev", 00:21:16.599 "config": [ 00:21:16.599 { 00:21:16.599 "method": "bdev_set_options", 00:21:16.599 "params": { 00:21:16.599 "bdev_io_pool_size": 65535, 00:21:16.599 "bdev_io_cache_size": 256, 00:21:16.599 "bdev_auto_examine": true, 00:21:16.599 "iobuf_small_cache_size": 128, 00:21:16.599 "iobuf_large_cache_size": 16 00:21:16.599 } 00:21:16.599 }, 00:21:16.599 { 00:21:16.599 "method": "bdev_raid_set_options", 00:21:16.599 "params": { 00:21:16.599 "process_window_size_kb": 1024, 00:21:16.599 "process_max_bandwidth_mb_sec": 0 00:21:16.599 } 00:21:16.599 }, 00:21:16.599 { 00:21:16.599 "method": "bdev_iscsi_set_options", 00:21:16.599 "params": { 00:21:16.599 "timeout_sec": 30 00:21:16.599 } 00:21:16.599 }, 00:21:16.599 { 00:21:16.599 "method": "bdev_nvme_set_options", 00:21:16.599 "params": { 00:21:16.599 "action_on_timeout": "none", 00:21:16.599 "timeout_us": 0, 00:21:16.599 "timeout_admin_us": 0, 00:21:16.599 "keep_alive_timeout_ms": 10000, 00:21:16.599 "arbitration_burst": 0, 00:21:16.599 "low_priority_weight": 0, 00:21:16.599 "medium_priority_weight": 0, 00:21:16.599 "high_priority_weight": 0, 00:21:16.599 "nvme_adminq_poll_period_us": 10000, 00:21:16.599 "nvme_ioq_poll_period_us": 0, 00:21:16.599 "io_queue_requests": 512, 00:21:16.599 "delay_cmd_submit": true, 00:21:16.599 "transport_retry_count": 4, 00:21:16.599 "bdev_retry_count": 3, 00:21:16.599 "transport_ack_timeout": 0, 00:21:16.599 "ctrlr_loss_timeout_sec": 0, 00:21:16.599 "reconnect_delay_sec": 0, 00:21:16.599 "fast_io_fail_timeout_sec": 0, 00:21:16.599 "disable_auto_failback": false, 00:21:16.599 "generate_uuids": false, 00:21:16.599 "transport_tos": 0, 00:21:16.599 "nvme_error_stat": false, 00:21:16.599 "rdma_srq_size": 0, 00:21:16.599 "io_path_stat": false, 00:21:16.599 "allow_accel_sequence": false, 00:21:16.599 "rdma_max_cq_size": 0, 00:21:16.599 "rdma_cm_event_timeout_ms": 0, 00:21:16.599 "dhchap_digests": [ 00:21:16.599 "sha256", 00:21:16.599 "sha384", 00:21:16.599 "sha512" 00:21:16.599 ], 00:21:16.599 "dhchap_dhgroups": [ 00:21:16.599 "null", 00:21:16.599 "ffdhe2048", 00:21:16.599 "ffdhe3072", 00:21:16.599 "ffdhe4096", 00:21:16.599 "ffdhe6144", 00:21:16.599 "ffdhe8192" 00:21:16.599 ] 00:21:16.599 } 00:21:16.600 }, 00:21:16.600 { 00:21:16.600 "method": "bdev_nvme_attach_controller", 00:21:16.600 "params": { 00:21:16.600 "name": "TLSTEST", 00:21:16.600 "trtype": "TCP", 00:21:16.600 "adrfam": "IPv4", 00:21:16.600 "traddr": "10.0.0.2", 00:21:16.600 "trsvcid": "4420", 00:21:16.600 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.600 "prchk_reftag": false, 00:21:16.600 "prchk_guard": false, 00:21:16.600 "ctrlr_loss_timeout_sec": 0, 00:21:16.600 "reconnect_delay_sec": 0, 00:21:16.600 "fast_io_fail_timeout_sec": 0, 00:21:16.600 "psk": "key0", 00:21:16.600 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:16.600 "hdgst": false, 00:21:16.600 "ddgst": false, 00:21:16.600 "multipath": "multipath" 00:21:16.600 } 00:21:16.600 }, 00:21:16.600 { 00:21:16.600 "method": 
"bdev_nvme_set_hotplug", 00:21:16.600 "params": { 00:21:16.600 "period_us": 100000, 00:21:16.600 "enable": false 00:21:16.600 } 00:21:16.600 }, 00:21:16.600 { 00:21:16.600 "method": "bdev_wait_for_examine" 00:21:16.600 } 00:21:16.600 ] 00:21:16.600 }, 00:21:16.600 { 00:21:16.600 "subsystem": "nbd", 00:21:16.600 "config": [] 00:21:16.600 } 00:21:16.600 ] 00:21:16.600 }' 00:21:16.600 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1991875 00:21:16.600 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1991875 ']' 00:21:16.600 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1991875 00:21:16.600 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:16.600 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:16.600 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1991875 00:21:16.600 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:16.600 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:16.600 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1991875' 00:21:16.600 killing process with pid 1991875 00:21:16.600 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1991875 00:21:16.600 Received shutdown signal, test time was about 10.000000 seconds 00:21:16.600 00:21:16.600 Latency(us) 00:21:16.600 [2024-11-28T07:20:13.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.600 [2024-11-28T07:20:13.889Z] =================================================================================================================== 00:21:16.600 [2024-11-28T07:20:13.889Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:16.600 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1991875 00:21:16.600 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1991412 00:21:16.600 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1991412 ']' 00:21:16.600 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1991412 00:21:16.600 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:16.600 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:16.600 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1991412 00:21:16.600 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:16.600 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:16.600 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1991412' 00:21:16.600 killing process with pid 1991412 00:21:16.600 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1991412 00:21:16.863 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1991412 00:21:16.863 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:16.864 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:16.864 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:16.864 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:21:16.864 "subsystems": [ 00:21:16.864 { 00:21:16.864 "subsystem": "keyring", 00:21:16.864 "config": [ 00:21:16.864 { 00:21:16.864 "method": "keyring_file_add_key", 00:21:16.864 "params": { 00:21:16.864 "name": "key0", 00:21:16.864 "path": "/tmp/tmp.C9ubOQ6oap" 00:21:16.864 } 00:21:16.864 } 00:21:16.864 ] 00:21:16.864 }, 00:21:16.864 { 00:21:16.864 "subsystem": "iobuf", 00:21:16.864 "config": [ 00:21:16.864 { 00:21:16.864 "method": "iobuf_set_options", 00:21:16.864 "params": { 00:21:16.864 "small_pool_count": 8192, 00:21:16.864 "large_pool_count": 1024, 00:21:16.864 "small_bufsize": 8192, 00:21:16.864 "large_bufsize": 135168, 00:21:16.864 "enable_numa": false 00:21:16.864 } 00:21:16.864 } 00:21:16.864 ] 00:21:16.864 }, 00:21:16.864 { 00:21:16.864 "subsystem": "sock", 00:21:16.864 "config": [ 00:21:16.864 { 00:21:16.864 "method": "sock_set_default_impl", 00:21:16.864 "params": { 00:21:16.864 "impl_name": "posix" 00:21:16.864 } 00:21:16.864 }, 00:21:16.864 { 00:21:16.864 "method": "sock_impl_set_options", 00:21:16.864 "params": { 00:21:16.864 "impl_name": "ssl", 00:21:16.864 "recv_buf_size": 4096, 00:21:16.864 "send_buf_size": 4096, 00:21:16.864 "enable_recv_pipe": true, 00:21:16.864 "enable_quickack": false, 00:21:16.864 "enable_placement_id": 0, 00:21:16.864 "enable_zerocopy_send_server": true, 00:21:16.864 "enable_zerocopy_send_client": false, 00:21:16.864 "zerocopy_threshold": 0, 00:21:16.864 "tls_version": 0, 00:21:16.864 "enable_ktls": false 00:21:16.864 } 00:21:16.864 }, 00:21:16.864 { 00:21:16.864 "method": "sock_impl_set_options", 00:21:16.864 "params": { 00:21:16.864 "impl_name": "posix", 00:21:16.864 "recv_buf_size": 2097152, 00:21:16.864 "send_buf_size": 2097152, 00:21:16.864 "enable_recv_pipe": true, 00:21:16.864 "enable_quickack": false, 00:21:16.864 "enable_placement_id": 0, 00:21:16.864 "enable_zerocopy_send_server": true, 00:21:16.864 "enable_zerocopy_send_client": false, 00:21:16.864 "zerocopy_threshold": 0, 00:21:16.864 "tls_version": 0, 00:21:16.864 "enable_ktls": false 00:21:16.864 } 00:21:16.864 } 00:21:16.864 ] 00:21:16.864 }, 00:21:16.864 { 00:21:16.864 "subsystem": "vmd", 00:21:16.864 "config": [] 00:21:16.864 }, 00:21:16.864 { 00:21:16.864 "subsystem": "accel", 00:21:16.864 "config": [ 00:21:16.864 { 00:21:16.864 "method": "accel_set_options", 00:21:16.864 "params": { 00:21:16.864 "small_cache_size": 128, 00:21:16.864 "large_cache_size": 16, 00:21:16.864 "task_count": 2048, 00:21:16.864 "sequence_count": 2048, 00:21:16.864 "buf_count": 2048 00:21:16.864 } 00:21:16.864 } 00:21:16.864 ] 00:21:16.864 }, 00:21:16.864 { 00:21:16.864 "subsystem": "bdev", 00:21:16.864 "config": [ 00:21:16.864 { 00:21:16.864 "method": "bdev_set_options", 00:21:16.864 "params": { 00:21:16.864 "bdev_io_pool_size": 65535, 00:21:16.864 "bdev_io_cache_size": 256, 00:21:16.864 "bdev_auto_examine": true, 00:21:16.864 "iobuf_small_cache_size": 128, 00:21:16.864 "iobuf_large_cache_size": 16 00:21:16.864 } 00:21:16.864 }, 00:21:16.864 { 00:21:16.864 "method": "bdev_raid_set_options", 00:21:16.864 "params": { 00:21:16.864 "process_window_size_kb": 1024, 00:21:16.864 "process_max_bandwidth_mb_sec": 0 00:21:16.864 } 00:21:16.864 }, 
00:21:16.864 { 00:21:16.864 "method": "bdev_iscsi_set_options", 00:21:16.864 "params": { 00:21:16.864 "timeout_sec": 30 00:21:16.864 } 00:21:16.864 }, 00:21:16.864 { 00:21:16.864 "method": "bdev_nvme_set_options", 00:21:16.864 "params": { 00:21:16.864 "action_on_timeout": "none", 00:21:16.864 "timeout_us": 0, 00:21:16.864 "timeout_admin_us": 0, 00:21:16.864 "keep_alive_timeout_ms": 10000, 00:21:16.864 "arbitration_burst": 0, 00:21:16.864 "low_priority_weight": 0, 00:21:16.864 "medium_priority_weight": 0, 00:21:16.864 "high_priority_weight": 0, 00:21:16.864 "nvme_adminq_poll_period_us": 10000, 00:21:16.864 "nvme_ioq_poll_period_us": 0, 00:21:16.864 "io_queue_requests": 0, 00:21:16.864 "delay_cmd_submit": true, 00:21:16.864 "transport_retry_count": 4, 00:21:16.864 "bdev_retry_count": 3, 00:21:16.864 "transport_ack_timeout": 0, 00:21:16.864 "ctrlr_loss_timeout_sec": 0, 00:21:16.864 "reconnect_delay_sec": 0, 00:21:16.864 "fast_io_fail_timeout_sec": 0, 00:21:16.864 "disable_auto_failback": false, 00:21:16.864 "generate_uuids": false, 00:21:16.864 "transport_tos": 0, 00:21:16.864 "nvme_error_stat": false, 00:21:16.864 "rdma_srq_size": 0, 00:21:16.864 "io_path_stat": false, 00:21:16.864 "allow_accel_sequence": false, 00:21:16.864 "rdma_max_cq_size": 0, 00:21:16.864 "rdma_cm_event_timeout_ms": 0, 00:21:16.864 "dhchap_digests": [ 00:21:16.864 "sha256", 00:21:16.864 "sha384", 00:21:16.864 "sha512" 00:21:16.864 ], 00:21:16.864 "dhchap_dhgroups": [ 00:21:16.864 "null", 00:21:16.864 "ffdhe2048", 00:21:16.864 "ffdhe3072", 00:21:16.864 "ffdhe4096", 00:21:16.864 "ffdhe6144", 00:21:16.864 "ffdhe8192" 00:21:16.864 ] 00:21:16.864 } 00:21:16.864 }, 00:21:16.864 { 00:21:16.864 "method": "bdev_nvme_set_hotplug", 00:21:16.864 "params": { 00:21:16.864 "period_us": 100000, 00:21:16.864 "enable": false 00:21:16.864 } 00:21:16.864 }, 00:21:16.864 { 00:21:16.864 "method": "bdev_malloc_create", 00:21:16.864 "params": { 00:21:16.864 "name": "malloc0", 00:21:16.864 "num_blocks": 8192, 00:21:16.864 "block_size": 4096, 00:21:16.864 "physical_block_size": 4096, 00:21:16.864 "uuid": "f2e6ad94-c92c-4928-b925-689d9fd66ad9", 00:21:16.864 "optimal_io_boundary": 0, 00:21:16.864 "md_size": 0, 00:21:16.864 "dif_type": 0, 00:21:16.864 "dif_is_head_of_md": false, 00:21:16.864 "dif_pi_format": 0 00:21:16.864 } 00:21:16.864 }, 00:21:16.864 { 00:21:16.864 "method": "bdev_wait_for_examine" 00:21:16.864 } 00:21:16.864 ] 00:21:16.864 }, 00:21:16.864 { 00:21:16.864 "subsystem": "nbd", 00:21:16.864 "config": [] 00:21:16.864 }, 00:21:16.864 { 00:21:16.864 "subsystem": "scheduler", 00:21:16.864 "config": [ 00:21:16.864 { 00:21:16.864 "method": "framework_set_scheduler", 00:21:16.864 "params": { 00:21:16.864 "name": "static" 00:21:16.864 } 00:21:16.864 } 00:21:16.864 ] 00:21:16.864 }, 00:21:16.864 { 00:21:16.864 "subsystem": "nvmf", 00:21:16.864 "config": [ 00:21:16.864 { 00:21:16.864 "method": "nvmf_set_config", 00:21:16.864 "params": { 00:21:16.864 "discovery_filter": "match_any", 00:21:16.864 "admin_cmd_passthru": { 00:21:16.864 "identify_ctrlr": false 00:21:16.864 }, 00:21:16.864 "dhchap_digests": [ 00:21:16.864 "sha256", 00:21:16.864 "sha384", 00:21:16.864 "sha512" 00:21:16.864 ], 00:21:16.864 "dhchap_dhgroups": [ 00:21:16.864 "null", 00:21:16.864 "ffdhe2048", 00:21:16.864 "ffdhe3072", 00:21:16.864 "ffdhe4096", 00:21:16.864 "ffdhe6144", 00:21:16.864 "ffdhe8192" 00:21:16.864 ] 00:21:16.864 } 00:21:16.864 }, 00:21:16.864 { 00:21:16.864 "method": "nvmf_set_max_subsystems", 00:21:16.864 "params": { 00:21:16.864 "max_subsystems": 1024 
00:21:16.864 } 00:21:16.864 }, 00:21:16.864 { 00:21:16.864 "method": "nvmf_set_crdt", 00:21:16.864 "params": { 00:21:16.864 "crdt1": 0, 00:21:16.865 "crdt2": 0, 00:21:16.865 "crdt3": 0 00:21:16.865 } 00:21:16.865 }, 00:21:16.865 { 00:21:16.865 "method": "nvmf_create_transport", 00:21:16.865 "params": { 00:21:16.865 "trtype": "TCP", 00:21:16.865 "max_queue_depth": 128, 00:21:16.865 "max_io_qpairs_per_ctrlr": 127, 00:21:16.865 "in_capsule_data_size": 4096, 00:21:16.865 "max_io_size": 131072, 00:21:16.865 "io_unit_size": 131072, 00:21:16.865 "max_aq_depth": 128, 00:21:16.865 "num_shared_buffers": 511, 00:21:16.865 "buf_cache_size": 4294967295, 00:21:16.865 "dif_insert_or_strip": false, 00:21:16.865 "zcopy": false, 00:21:16.865 "c2h_success": false, 00:21:16.865 "sock_priority": 0, 00:21:16.865 "abort_timeout_sec": 1, 00:21:16.865 "ack_timeout": 0, 00:21:16.865 "data_wr_pool_size": 0 00:21:16.865 } 00:21:16.865 }, 00:21:16.865 { 00:21:16.865 "method": "nvmf_create_subsystem", 00:21:16.865 "params": { 00:21:16.865 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.865 "allow_any_host": false, 00:21:16.865 "serial_number": "SPDK00000000000001", 00:21:16.865 "model_number": "SPDK bdev Controller", 00:21:16.865 "max_namespaces": 10, 00:21:16.865 "min_cntlid": 1, 00:21:16.865 "max_cntlid": 65519, 00:21:16.865 "ana_reporting": false 00:21:16.865 } 00:21:16.865 }, 00:21:16.865 { 00:21:16.865 "method": "nvmf_subsystem_add_host", 00:21:16.865 "params": { 00:21:16.865 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.865 "host": "nqn.2016-06.io.spdk:host1", 00:21:16.865 "psk": "key0" 00:21:16.865 } 00:21:16.865 }, 00:21:16.865 { 00:21:16.865 "method": "nvmf_subsystem_add_ns", 00:21:16.865 "params": { 00:21:16.865 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.865 "namespace": { 00:21:16.865 "nsid": 1, 00:21:16.865 "bdev_name": "malloc0", 00:21:16.865 "nguid": "F2E6AD94C92C4928B925689D9FD66AD9", 00:21:16.865 "uuid": "f2e6ad94-c92c-4928-b925-689d9fd66ad9", 00:21:16.865 "no_auto_visible": false 00:21:16.865 } 00:21:16.865 } 00:21:16.865 }, 00:21:16.865 { 00:21:16.865 "method": "nvmf_subsystem_add_listener", 00:21:16.865 "params": { 00:21:16.865 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.865 "listen_address": { 00:21:16.865 "trtype": "TCP", 00:21:16.865 "adrfam": "IPv4", 00:21:16.865 "traddr": "10.0.0.2", 00:21:16.865 "trsvcid": "4420" 00:21:16.865 }, 00:21:16.865 "secure_channel": true 00:21:16.865 } 00:21:16.865 } 00:21:16.865 ] 00:21:16.865 } 00:21:16.865 ] 00:21:16.865 }' 00:21:16.865 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.865 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1992227 00:21:16.865 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1992227 00:21:16.865 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:16.865 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1992227 ']' 00:21:16.865 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.865 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.865 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:21:16.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.865 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.865 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.865 [2024-11-28 08:20:14.065088] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:21:16.865 [2024-11-28 08:20:14.065148] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.127 [2024-11-28 08:20:14.154695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.127 [2024-11-28 08:20:14.184294] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.127 [2024-11-28 08:20:14.184322] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.127 [2024-11-28 08:20:14.184328] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.127 [2024-11-28 08:20:14.184332] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.127 [2024-11-28 08:20:14.184339] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:17.127 [2024-11-28 08:20:14.184832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.127 [2024-11-28 08:20:14.378206] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.127 [2024-11-28 08:20:14.410231] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:17.127 [2024-11-28 08:20:14.410422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.700 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.700 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:17.700 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:17.700 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:17.700 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.700 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.700 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1992263 00:21:17.700 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1992263 /var/tmp/bdevperf.sock 00:21:17.700 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1992263 ']' 00:21:17.700 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.700 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:17.700 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:17.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:17.700 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:17.700 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.700 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.700 08:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:21:17.700 "subsystems": [ 00:21:17.700 { 00:21:17.700 "subsystem": "keyring", 00:21:17.700 "config": [ 00:21:17.700 { 00:21:17.700 "method": "keyring_file_add_key", 00:21:17.700 "params": { 00:21:17.700 "name": "key0", 00:21:17.700 "path": "/tmp/tmp.C9ubOQ6oap" 00:21:17.700 } 00:21:17.700 } 00:21:17.700 ] 00:21:17.700 }, 00:21:17.700 { 00:21:17.700 "subsystem": "iobuf", 00:21:17.700 "config": [ 00:21:17.700 { 00:21:17.700 "method": "iobuf_set_options", 00:21:17.700 "params": { 00:21:17.700 "small_pool_count": 8192, 00:21:17.700 "large_pool_count": 1024, 00:21:17.700 "small_bufsize": 8192, 00:21:17.700 "large_bufsize": 135168, 00:21:17.700 "enable_numa": false 00:21:17.700 } 00:21:17.700 } 00:21:17.700 ] 00:21:17.700 }, 00:21:17.701 { 00:21:17.701 "subsystem": "sock", 00:21:17.701 "config": [ 00:21:17.701 { 00:21:17.701 "method": "sock_set_default_impl", 00:21:17.701 "params": { 00:21:17.701 "impl_name": "posix" 00:21:17.701 } 00:21:17.701 }, 00:21:17.701 { 00:21:17.701 "method": "sock_impl_set_options", 00:21:17.701 "params": { 00:21:17.701 "impl_name": "ssl", 00:21:17.701 "recv_buf_size": 4096, 00:21:17.701 "send_buf_size": 4096, 00:21:17.701 "enable_recv_pipe": true, 00:21:17.701 "enable_quickack": false, 00:21:17.701 "enable_placement_id": 0, 00:21:17.701 "enable_zerocopy_send_server": true, 00:21:17.701 "enable_zerocopy_send_client": false, 00:21:17.701 "zerocopy_threshold": 0, 00:21:17.701 "tls_version": 0, 00:21:17.701 "enable_ktls": false 00:21:17.701 } 00:21:17.701 }, 00:21:17.701 { 00:21:17.701 "method": "sock_impl_set_options", 00:21:17.701 "params": { 00:21:17.701 "impl_name": "posix", 00:21:17.701 "recv_buf_size": 2097152, 00:21:17.701 "send_buf_size": 2097152, 00:21:17.701 "enable_recv_pipe": true, 00:21:17.701 "enable_quickack": false, 00:21:17.701 "enable_placement_id": 0, 00:21:17.701 "enable_zerocopy_send_server": true, 00:21:17.701 "enable_zerocopy_send_client": false, 00:21:17.701 "zerocopy_threshold": 0, 00:21:17.701 "tls_version": 0, 00:21:17.701 "enable_ktls": false 00:21:17.701 } 00:21:17.701 } 00:21:17.701 ] 00:21:17.701 }, 00:21:17.701 { 00:21:17.701 "subsystem": "vmd", 00:21:17.701 "config": [] 00:21:17.701 }, 00:21:17.701 { 00:21:17.701 "subsystem": "accel", 00:21:17.701 "config": [ 00:21:17.701 { 00:21:17.701 "method": "accel_set_options", 00:21:17.701 "params": { 00:21:17.701 "small_cache_size": 128, 00:21:17.701 "large_cache_size": 16, 00:21:17.701 "task_count": 2048, 00:21:17.701 "sequence_count": 2048, 00:21:17.701 "buf_count": 2048 00:21:17.701 } 00:21:17.701 } 00:21:17.701 ] 00:21:17.701 }, 00:21:17.701 { 00:21:17.701 "subsystem": "bdev", 00:21:17.701 "config": [ 00:21:17.701 { 00:21:17.701 "method": "bdev_set_options", 00:21:17.701 "params": { 00:21:17.701 "bdev_io_pool_size": 65535, 00:21:17.701 "bdev_io_cache_size": 256, 00:21:17.701 "bdev_auto_examine": true, 00:21:17.701 "iobuf_small_cache_size": 128, 
00:21:17.701 "iobuf_large_cache_size": 16 00:21:17.701 } 00:21:17.701 }, 00:21:17.701 { 00:21:17.701 "method": "bdev_raid_set_options", 00:21:17.701 "params": { 00:21:17.701 "process_window_size_kb": 1024, 00:21:17.701 "process_max_bandwidth_mb_sec": 0 00:21:17.701 } 00:21:17.701 }, 00:21:17.701 { 00:21:17.701 "method": "bdev_iscsi_set_options", 00:21:17.701 "params": { 00:21:17.701 "timeout_sec": 30 00:21:17.701 } 00:21:17.701 }, 00:21:17.701 { 00:21:17.701 "method": "bdev_nvme_set_options", 00:21:17.701 "params": { 00:21:17.701 "action_on_timeout": "none", 00:21:17.701 "timeout_us": 0, 00:21:17.701 "timeout_admin_us": 0, 00:21:17.701 "keep_alive_timeout_ms": 10000, 00:21:17.701 "arbitration_burst": 0, 00:21:17.701 "low_priority_weight": 0, 00:21:17.701 "medium_priority_weight": 0, 00:21:17.701 "high_priority_weight": 0, 00:21:17.701 "nvme_adminq_poll_period_us": 10000, 00:21:17.701 "nvme_ioq_poll_period_us": 0, 00:21:17.701 "io_queue_requests": 512, 00:21:17.701 "delay_cmd_submit": true, 00:21:17.701 "transport_retry_count": 4, 00:21:17.701 "bdev_retry_count": 3, 00:21:17.701 "transport_ack_timeout": 0, 00:21:17.701 "ctrlr_loss_timeout_sec": 0, 00:21:17.701 "reconnect_delay_sec": 0, 00:21:17.701 "fast_io_fail_timeout_sec": 0, 00:21:17.701 "disable_auto_failback": false, 00:21:17.701 "generate_uuids": false, 00:21:17.701 "transport_tos": 0, 00:21:17.701 "nvme_error_stat": false, 00:21:17.701 "rdma_srq_size": 0, 00:21:17.701 "io_path_stat": false, 00:21:17.701 "allow_accel_sequence": false, 00:21:17.701 "rdma_max_cq_size": 0, 00:21:17.701 "rdma_cm_event_timeout_ms": 0, 00:21:17.701 "dhchap_digests": [ 00:21:17.701 "sha256", 00:21:17.701 "sha384", 00:21:17.701 "sha512" 00:21:17.701 ], 00:21:17.701 "dhchap_dhgroups": [ 00:21:17.701 "null", 00:21:17.701 "ffdhe2048", 00:21:17.701 "ffdhe3072", 00:21:17.701 "ffdhe4096", 00:21:17.701 "ffdhe6144", 00:21:17.701 "ffdhe8192" 00:21:17.701 ] 00:21:17.701 } 00:21:17.701 }, 00:21:17.701 { 00:21:17.701 "method": "bdev_nvme_attach_controller", 00:21:17.701 "params": { 00:21:17.701 "name": "TLSTEST", 00:21:17.701 "trtype": "TCP", 00:21:17.701 "adrfam": "IPv4", 00:21:17.701 "traddr": "10.0.0.2", 00:21:17.701 "trsvcid": "4420", 00:21:17.701 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.701 "prchk_reftag": false, 00:21:17.701 "prchk_guard": false, 00:21:17.701 "ctrlr_loss_timeout_sec": 0, 00:21:17.701 "reconnect_delay_sec": 0, 00:21:17.701 "fast_io_fail_timeout_sec": 0, 00:21:17.701 "psk": "key0", 00:21:17.701 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:17.701 "hdgst": false, 00:21:17.701 "ddgst": false, 00:21:17.701 "multipath": "multipath" 00:21:17.701 } 00:21:17.701 }, 00:21:17.701 { 00:21:17.701 "method": "bdev_nvme_set_hotplug", 00:21:17.701 "params": { 00:21:17.701 "period_us": 100000, 00:21:17.701 "enable": false 00:21:17.701 } 00:21:17.701 }, 00:21:17.701 { 00:21:17.701 "method": "bdev_wait_for_examine" 00:21:17.701 } 00:21:17.701 ] 00:21:17.701 }, 00:21:17.701 { 00:21:17.701 "subsystem": "nbd", 00:21:17.701 "config": [] 00:21:17.701 } 00:21:17.701 ] 00:21:17.701 }' 00:21:17.701 [2024-11-28 08:20:14.931564] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:21:17.702 [2024-11-28 08:20:14.931620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1992263 ] 00:21:17.962 [2024-11-28 08:20:15.017432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.962 [2024-11-28 08:20:15.046955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.962 [2024-11-28 08:20:15.182068] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:18.534 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.534 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:18.534 08:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:18.534 Running I/O for 10 seconds... 00:21:20.862 5045.00 IOPS, 19.71 MiB/s [2024-11-28T07:20:19.094Z] 5084.50 IOPS, 19.86 MiB/s [2024-11-28T07:20:20.036Z] 4667.00 IOPS, 18.23 MiB/s [2024-11-28T07:20:20.977Z] 5008.75 IOPS, 19.57 MiB/s [2024-11-28T07:20:21.922Z] 5191.80 IOPS, 20.28 MiB/s [2024-11-28T07:20:22.863Z] 5189.50 IOPS, 20.27 MiB/s [2024-11-28T07:20:24.247Z] 5279.29 IOPS, 20.62 MiB/s [2024-11-28T07:20:25.188Z] 5339.00 IOPS, 20.86 MiB/s [2024-11-28T07:20:26.129Z] 5326.56 IOPS, 20.81 MiB/s [2024-11-28T07:20:26.129Z] 5267.30 IOPS, 20.58 MiB/s 00:21:28.840 Latency(us) 00:21:28.840 [2024-11-28T07:20:26.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.840 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:28.840 Verification LBA range: start 0x0 length 0x2000 00:21:28.840 TLSTESTn1 : 10.05 5254.99 20.53 0.00 0.00 24291.92 4532.91 46530.56 00:21:28.840 [2024-11-28T07:20:26.129Z] =================================================================================================================== 00:21:28.840 [2024-11-28T07:20:26.129Z] Total : 5254.99 20.53 0.00 0.00 24291.92 4532.91 46530.56 00:21:28.840 { 00:21:28.840 "results": [ 00:21:28.840 { 00:21:28.840 "job": "TLSTESTn1", 00:21:28.840 "core_mask": "0x4", 00:21:28.840 "workload": "verify", 00:21:28.840 "status": "finished", 00:21:28.840 "verify_range": { 00:21:28.840 "start": 0, 00:21:28.840 "length": 8192 00:21:28.840 }, 00:21:28.840 "queue_depth": 128, 00:21:28.840 "io_size": 4096, 00:21:28.840 "runtime": 10.047588, 00:21:28.840 "iops": 5254.992541493541, 00:21:28.840 "mibps": 20.527314615209143, 00:21:28.840 "io_failed": 0, 00:21:28.840 "io_timeout": 0, 00:21:28.840 "avg_latency_us": 24291.919644444446, 00:21:28.840 "min_latency_us": 4532.906666666667, 00:21:28.840 "max_latency_us": 46530.56 00:21:28.840 } 00:21:28.840 ], 00:21:28.840 "core_count": 1 00:21:28.840 } 00:21:28.840 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:28.840 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1992263 00:21:28.840 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1992263 ']' 00:21:28.840 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1992263 00:21:28.840 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:21:28.840 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:28.840 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1992263 00:21:28.841 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:28.841 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:28.841 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1992263' 00:21:28.841 killing process with pid 1992263 00:21:28.841 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1992263 00:21:28.841 Received shutdown signal, test time was about 10.000000 seconds 00:21:28.841 00:21:28.841 Latency(us) 00:21:28.841 [2024-11-28T07:20:26.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.841 [2024-11-28T07:20:26.130Z] =================================================================================================================== 00:21:28.841 [2024-11-28T07:20:26.130Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:28.841 08:20:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1992263 00:21:28.841 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1992227 00:21:28.841 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1992227 ']' 00:21:28.841 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1992227 00:21:28.841 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:28.841 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:28.841 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1992227 00:21:29.101 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:29.101 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:29.101 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1992227' 00:21:29.101 killing process with pid 1992227 00:21:29.101 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1992227 00:21:29.101 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1992227 00:21:29.101 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:21:29.101 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:29.101 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:29.101 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.101 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1994600 00:21:29.101 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1994600 00:21:29.101 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
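[Annotation — not captured output.] In the pass that just finished (pids 1992227 and 1992263), both the target and bdevperf were restarted from the JSON dumped by save_config; the -c /dev/fd/62 and -c /dev/fd/63 arguments in the traces are consistent with bash process substitution feeding that JSON back in, roughly as below ($tgtconf and $bdevperfconf are the shell variables visible in the log):

    # Capture the running config over each RPC socket, then replay it
    # into freshly started processes via process substitution.
    tgtconf=$(scripts/rpc.py save_config)
    bdevperfconf=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
    nvmf_tgt -m 0x2 -c <(echo "$tgtconf") &
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &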
00:21:29.101 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1994600 ']' 00:21:29.101 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.101 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:29.101 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.101 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:29.101 08:20:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.101 [2024-11-28 08:20:26.324202] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:21:29.101 [2024-11-28 08:20:26.324263] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.442 [2024-11-28 08:20:26.421131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.442 [2024-11-28 08:20:26.469471] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.442 [2024-11-28 08:20:26.469534] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.442 [2024-11-28 08:20:26.469543] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.442 [2024-11-28 08:20:26.469550] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.442 [2024-11-28 08:20:26.469558] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
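[Annotation — not captured output.] Each nvmf_tgt in this job runs inside the cvl_0_0_ns_spdk network namespace via ip netns exec, so the 10.0.0.2:4420 listener lives in that namespace rather than the host stack; presumably the initiator reaches it over interfaces set up earlier in the job. To inspect the listener from the host, something like:

    # List the TLS-capable NVMe/TCP listener inside the test namespace.
    ip netns exec cvl_0_0_ns_spdk ss -ltn 'sport = :4420'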
00:21:29.442 [2024-11-28 08:20:26.470371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.041 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.041 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:30.041 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:30.041 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:30.041 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.041 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.041 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.C9ubOQ6oap 00:21:30.041 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.C9ubOQ6oap 00:21:30.041 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:30.301 [2024-11-28 08:20:27.341868] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.301 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:30.301 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:30.562 [2024-11-28 08:20:27.742887] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:30.562 [2024-11-28 08:20:27.743252] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.562 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:30.822 malloc0 00:21:30.822 08:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:31.083 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.C9ubOQ6oap 00:21:31.083 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:31.343 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1994973 00:21:31.343 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:31.343 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:31.343 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1994973 /var/tmp/bdevperf.sock 00:21:31.343 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1994973 ']' 00:21:31.343 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:31.343 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.344 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:31.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:31.344 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.344 08:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.344 [2024-11-28 08:20:28.602578] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:21:31.344 [2024-11-28 08:20:28.602653] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1994973 ] 00:21:31.603 [2024-11-28 08:20:28.692416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.603 [2024-11-28 08:20:28.726204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.176 08:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.176 08:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:32.176 08:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.C9ubOQ6oap 00:21:32.437 08:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:32.697 [2024-11-28 08:20:29.753316] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:32.697 nvme0n1 00:21:32.697 08:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:32.697 Running I/O for 1 seconds... 
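In the bdevperf output that follows, the first line (3538.00 IOPS, 13.82 MiB/s) is a one-second in-flight sample, while the Total row is the whole-run average. Throughput is derived directly from IOPS at the fixed 4 KiB I/O size; for the final figures: 3606.41 IOPS x 4096 B / 2^20 = 14.09 MiB/s, which matches the "mibps" field in the JSON results dump.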
00:21:34.082 3538.00 IOPS, 13.82 MiB/s 00:21:34.082 Latency(us) 00:21:34.082 [2024-11-28T07:20:31.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.082 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:34.082 Verification LBA range: start 0x0 length 0x2000 00:21:34.082 nvme0n1 : 1.02 3606.41 14.09 0.00 0.00 35240.16 5515.95 72089.60 00:21:34.082 [2024-11-28T07:20:31.371Z] =================================================================================================================== 00:21:34.082 [2024-11-28T07:20:31.371Z] Total : 3606.41 14.09 0.00 0.00 35240.16 5515.95 72089.60 00:21:34.082 { 00:21:34.082 "results": [ 00:21:34.082 { 00:21:34.082 "job": "nvme0n1", 00:21:34.082 "core_mask": "0x2", 00:21:34.082 "workload": "verify", 00:21:34.082 "status": "finished", 00:21:34.082 "verify_range": { 00:21:34.082 "start": 0, 00:21:34.082 "length": 8192 00:21:34.082 }, 00:21:34.082 "queue_depth": 128, 00:21:34.082 "io_size": 4096, 00:21:34.082 "runtime": 1.016522, 00:21:34.082 "iops": 3606.41481443589, 00:21:34.082 "mibps": 14.087557868890196, 00:21:34.082 "io_failed": 0, 00:21:34.082 "io_timeout": 0, 00:21:34.082 "avg_latency_us": 35240.16206583015, 00:21:34.082 "min_latency_us": 5515.946666666667, 00:21:34.082 "max_latency_us": 72089.6 00:21:34.082 } 00:21:34.082 ], 00:21:34.082 "core_count": 1 00:21:34.082 } 00:21:34.082 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1994973 00:21:34.082 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1994973 ']' 00:21:34.082 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1994973 00:21:34.082 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:34.082 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.082 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1994973 00:21:34.082 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:34.082 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:34.082 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1994973' 00:21:34.082 killing process with pid 1994973 00:21:34.082 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1994973 00:21:34.082 Received shutdown signal, test time was about 1.000000 seconds 00:21:34.082 00:21:34.082 Latency(us) 00:21:34.082 [2024-11-28T07:20:31.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.082 [2024-11-28T07:20:31.371Z] =================================================================================================================== 00:21:34.082 [2024-11-28T07:20:31.371Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:34.082 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1994973 00:21:34.082 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1994600 00:21:34.083 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1994600 ']' 00:21:34.083 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1994600 00:21:34.083 08:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:34.083 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.083 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1994600 00:21:34.083 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:34.083 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:34.083 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1994600' 00:21:34.083 killing process with pid 1994600 00:21:34.083 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1994600 00:21:34.083 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1994600 00:21:34.083 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:34.083 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:34.083 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:34.083 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.083 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1995610 00:21:34.083 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1995610 00:21:34.083 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:34.083 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1995610 ']' 00:21:34.083 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.083 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.083 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.083 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.083 08:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.345 [2024-11-28 08:20:31.415623] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:21:34.345 [2024-11-28 08:20:31.415692] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.345 [2024-11-28 08:20:31.515963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.345 [2024-11-28 08:20:31.565345] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.345 [2024-11-28 08:20:31.565400] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:34.345 [2024-11-28 08:20:31.565409] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.345 [2024-11-28 08:20:31.565416] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.345 [2024-11-28 08:20:31.565422] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:34.345 [2024-11-28 08:20:31.566197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.290 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.290 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:35.290 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:35.290 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:35.290 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.290 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.290 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:35.290 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.290 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.290 [2024-11-28 08:20:32.286060] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.290 malloc0 00:21:35.290 [2024-11-28 08:20:32.316200] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:35.290 [2024-11-28 08:20:32.316547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.290 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.290 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1995686 00:21:35.290 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1995686 /var/tmp/bdevperf.sock 00:21:35.290 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:35.290 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1995686 ']' 00:21:35.290 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:35.290 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.290 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:35.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:35.290 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.290 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.290 [2024-11-28 08:20:32.408157] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
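On the initiator side the pattern mirrors the target: the bdevperf instance just launched gets the same PSK loaded into its keyring over its private RPC socket, attaches with --psk, and is then driven by bdevperf.py, exactly as in the first run. Condensed from the trace (same socket path and key file as this run; paths relative to the SPDK tree):

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.C9ubOQ6oap
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests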
00:21:35.290 [2024-11-28 08:20:32.408233] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1995686 ] 00:21:35.290 [2024-11-28 08:20:32.499215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.290 [2024-11-28 08:20:32.533687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.232 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.232 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:36.232 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.C9ubOQ6oap 00:21:36.232 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:36.493 [2024-11-28 08:20:33.536894] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:36.493 nvme0n1 00:21:36.493 08:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:36.493 Running I/O for 1 seconds... 00:21:37.697 4830.00 IOPS, 18.87 MiB/s 00:21:37.697 Latency(us) 00:21:37.697 [2024-11-28T07:20:34.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.697 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:37.697 Verification LBA range: start 0x0 length 0x2000 00:21:37.697 nvme0n1 : 1.05 4741.92 18.52 0.00 0.00 26480.89 4997.12 44346.03 00:21:37.697 [2024-11-28T07:20:34.986Z] =================================================================================================================== 00:21:37.697 [2024-11-28T07:20:34.986Z] Total : 4741.92 18.52 0.00 0.00 26480.89 4997.12 44346.03 00:21:37.697 { 00:21:37.697 "results": [ 00:21:37.697 { 00:21:37.697 "job": "nvme0n1", 00:21:37.697 "core_mask": "0x2", 00:21:37.697 "workload": "verify", 00:21:37.697 "status": "finished", 00:21:37.697 "verify_range": { 00:21:37.697 "start": 0, 00:21:37.697 "length": 8192 00:21:37.697 }, 00:21:37.697 "queue_depth": 128, 00:21:37.697 "io_size": 4096, 00:21:37.697 "runtime": 1.045779, 00:21:37.697 "iops": 4741.919659889901, 00:21:37.697 "mibps": 18.523123671444925, 00:21:37.697 "io_failed": 0, 00:21:37.697 "io_timeout": 0, 00:21:37.697 "avg_latency_us": 26480.88916851516, 00:21:37.697 "min_latency_us": 4997.12, 00:21:37.697 "max_latency_us": 44346.026666666665 00:21:37.697 } 00:21:37.697 ], 00:21:37.697 "core_count": 1 00:21:37.697 } 00:21:37.697 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:37.697 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.697 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.697 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.697 08:20:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:37.697 "subsystems": [ 00:21:37.697 { 00:21:37.697 "subsystem": "keyring", 00:21:37.697 "config": [ 00:21:37.697 { 00:21:37.697 "method": "keyring_file_add_key", 00:21:37.697 "params": { 00:21:37.697 "name": "key0", 00:21:37.697 "path": "/tmp/tmp.C9ubOQ6oap" 00:21:37.697 } 00:21:37.697 } 00:21:37.697 ] 00:21:37.697 }, 00:21:37.697 { 00:21:37.697 "subsystem": "iobuf", 00:21:37.697 "config": [ 00:21:37.697 { 00:21:37.697 "method": "iobuf_set_options", 00:21:37.697 "params": { 00:21:37.697 "small_pool_count": 8192, 00:21:37.697 "large_pool_count": 1024, 00:21:37.697 "small_bufsize": 8192, 00:21:37.697 "large_bufsize": 135168, 00:21:37.697 "enable_numa": false 00:21:37.697 } 00:21:37.697 } 00:21:37.697 ] 00:21:37.697 }, 00:21:37.697 { 00:21:37.697 "subsystem": "sock", 00:21:37.697 "config": [ 00:21:37.697 { 00:21:37.697 "method": "sock_set_default_impl", 00:21:37.697 "params": { 00:21:37.697 "impl_name": "posix" 00:21:37.697 } 00:21:37.697 }, 00:21:37.697 { 00:21:37.697 "method": "sock_impl_set_options", 00:21:37.697 "params": { 00:21:37.697 "impl_name": "ssl", 00:21:37.697 "recv_buf_size": 4096, 00:21:37.697 "send_buf_size": 4096, 00:21:37.697 "enable_recv_pipe": true, 00:21:37.697 "enable_quickack": false, 00:21:37.697 "enable_placement_id": 0, 00:21:37.697 "enable_zerocopy_send_server": true, 00:21:37.697 "enable_zerocopy_send_client": false, 00:21:37.697 "zerocopy_threshold": 0, 00:21:37.697 "tls_version": 0, 00:21:37.697 "enable_ktls": false 00:21:37.697 } 00:21:37.697 }, 00:21:37.697 { 00:21:37.697 "method": "sock_impl_set_options", 00:21:37.697 "params": { 00:21:37.697 "impl_name": "posix", 00:21:37.697 "recv_buf_size": 2097152, 00:21:37.697 "send_buf_size": 2097152, 00:21:37.697 "enable_recv_pipe": true, 00:21:37.697 "enable_quickack": false, 00:21:37.697 "enable_placement_id": 0, 00:21:37.697 "enable_zerocopy_send_server": true, 00:21:37.697 "enable_zerocopy_send_client": false, 00:21:37.697 "zerocopy_threshold": 0, 00:21:37.697 "tls_version": 0, 00:21:37.697 "enable_ktls": false 00:21:37.697 } 00:21:37.697 } 00:21:37.697 ] 00:21:37.697 }, 00:21:37.697 { 00:21:37.697 "subsystem": "vmd", 00:21:37.697 "config": [] 00:21:37.697 }, 00:21:37.697 { 00:21:37.697 "subsystem": "accel", 00:21:37.697 "config": [ 00:21:37.697 { 00:21:37.697 "method": "accel_set_options", 00:21:37.697 "params": { 00:21:37.697 "small_cache_size": 128, 00:21:37.697 "large_cache_size": 16, 00:21:37.697 "task_count": 2048, 00:21:37.697 "sequence_count": 2048, 00:21:37.697 "buf_count": 2048 00:21:37.697 } 00:21:37.697 } 00:21:37.697 ] 00:21:37.697 }, 00:21:37.697 { 00:21:37.697 "subsystem": "bdev", 00:21:37.697 "config": [ 00:21:37.697 { 00:21:37.697 "method": "bdev_set_options", 00:21:37.697 "params": { 00:21:37.697 "bdev_io_pool_size": 65535, 00:21:37.697 "bdev_io_cache_size": 256, 00:21:37.697 "bdev_auto_examine": true, 00:21:37.697 "iobuf_small_cache_size": 128, 00:21:37.697 "iobuf_large_cache_size": 16 00:21:37.697 } 00:21:37.697 }, 00:21:37.697 { 00:21:37.697 "method": "bdev_raid_set_options", 00:21:37.697 "params": { 00:21:37.697 "process_window_size_kb": 1024, 00:21:37.697 "process_max_bandwidth_mb_sec": 0 00:21:37.697 } 00:21:37.697 }, 00:21:37.697 { 00:21:37.697 "method": "bdev_iscsi_set_options", 00:21:37.697 "params": { 00:21:37.697 "timeout_sec": 30 00:21:37.697 } 00:21:37.697 }, 00:21:37.697 { 00:21:37.697 "method": "bdev_nvme_set_options", 00:21:37.697 "params": { 00:21:37.697 "action_on_timeout": "none", 00:21:37.698 
"timeout_us": 0, 00:21:37.698 "timeout_admin_us": 0, 00:21:37.698 "keep_alive_timeout_ms": 10000, 00:21:37.698 "arbitration_burst": 0, 00:21:37.698 "low_priority_weight": 0, 00:21:37.698 "medium_priority_weight": 0, 00:21:37.698 "high_priority_weight": 0, 00:21:37.698 "nvme_adminq_poll_period_us": 10000, 00:21:37.698 "nvme_ioq_poll_period_us": 0, 00:21:37.698 "io_queue_requests": 0, 00:21:37.698 "delay_cmd_submit": true, 00:21:37.698 "transport_retry_count": 4, 00:21:37.698 "bdev_retry_count": 3, 00:21:37.698 "transport_ack_timeout": 0, 00:21:37.698 "ctrlr_loss_timeout_sec": 0, 00:21:37.698 "reconnect_delay_sec": 0, 00:21:37.698 "fast_io_fail_timeout_sec": 0, 00:21:37.698 "disable_auto_failback": false, 00:21:37.698 "generate_uuids": false, 00:21:37.698 "transport_tos": 0, 00:21:37.698 "nvme_error_stat": false, 00:21:37.698 "rdma_srq_size": 0, 00:21:37.698 "io_path_stat": false, 00:21:37.698 "allow_accel_sequence": false, 00:21:37.698 "rdma_max_cq_size": 0, 00:21:37.698 "rdma_cm_event_timeout_ms": 0, 00:21:37.698 "dhchap_digests": [ 00:21:37.698 "sha256", 00:21:37.698 "sha384", 00:21:37.698 "sha512" 00:21:37.698 ], 00:21:37.698 "dhchap_dhgroups": [ 00:21:37.698 "null", 00:21:37.698 "ffdhe2048", 00:21:37.698 "ffdhe3072", 00:21:37.698 "ffdhe4096", 00:21:37.698 "ffdhe6144", 00:21:37.698 "ffdhe8192" 00:21:37.698 ] 00:21:37.698 } 00:21:37.698 }, 00:21:37.698 { 00:21:37.698 "method": "bdev_nvme_set_hotplug", 00:21:37.698 "params": { 00:21:37.698 "period_us": 100000, 00:21:37.698 "enable": false 00:21:37.698 } 00:21:37.698 }, 00:21:37.698 { 00:21:37.698 "method": "bdev_malloc_create", 00:21:37.698 "params": { 00:21:37.698 "name": "malloc0", 00:21:37.698 "num_blocks": 8192, 00:21:37.698 "block_size": 4096, 00:21:37.698 "physical_block_size": 4096, 00:21:37.698 "uuid": "eb35fc24-49d3-40aa-8601-c6f79a0262c9", 00:21:37.698 "optimal_io_boundary": 0, 00:21:37.698 "md_size": 0, 00:21:37.698 "dif_type": 0, 00:21:37.698 "dif_is_head_of_md": false, 00:21:37.698 "dif_pi_format": 0 00:21:37.698 } 00:21:37.698 }, 00:21:37.698 { 00:21:37.698 "method": "bdev_wait_for_examine" 00:21:37.698 } 00:21:37.698 ] 00:21:37.698 }, 00:21:37.698 { 00:21:37.698 "subsystem": "nbd", 00:21:37.698 "config": [] 00:21:37.698 }, 00:21:37.698 { 00:21:37.698 "subsystem": "scheduler", 00:21:37.698 "config": [ 00:21:37.698 { 00:21:37.698 "method": "framework_set_scheduler", 00:21:37.698 "params": { 00:21:37.698 "name": "static" 00:21:37.698 } 00:21:37.698 } 00:21:37.698 ] 00:21:37.698 }, 00:21:37.698 { 00:21:37.698 "subsystem": "nvmf", 00:21:37.698 "config": [ 00:21:37.698 { 00:21:37.698 "method": "nvmf_set_config", 00:21:37.698 "params": { 00:21:37.698 "discovery_filter": "match_any", 00:21:37.698 "admin_cmd_passthru": { 00:21:37.698 "identify_ctrlr": false 00:21:37.698 }, 00:21:37.698 "dhchap_digests": [ 00:21:37.698 "sha256", 00:21:37.698 "sha384", 00:21:37.698 "sha512" 00:21:37.698 ], 00:21:37.698 "dhchap_dhgroups": [ 00:21:37.698 "null", 00:21:37.698 "ffdhe2048", 00:21:37.698 "ffdhe3072", 00:21:37.698 "ffdhe4096", 00:21:37.698 "ffdhe6144", 00:21:37.698 "ffdhe8192" 00:21:37.698 ] 00:21:37.698 } 00:21:37.698 }, 00:21:37.698 { 00:21:37.698 "method": "nvmf_set_max_subsystems", 00:21:37.698 "params": { 00:21:37.698 "max_subsystems": 1024 00:21:37.698 } 00:21:37.698 }, 00:21:37.698 { 00:21:37.698 "method": "nvmf_set_crdt", 00:21:37.698 "params": { 00:21:37.698 "crdt1": 0, 00:21:37.698 "crdt2": 0, 00:21:37.698 "crdt3": 0 00:21:37.698 } 00:21:37.698 }, 00:21:37.698 { 00:21:37.698 "method": "nvmf_create_transport", 00:21:37.698 "params": 
{ 00:21:37.698 "trtype": "TCP", 00:21:37.698 "max_queue_depth": 128, 00:21:37.698 "max_io_qpairs_per_ctrlr": 127, 00:21:37.698 "in_capsule_data_size": 4096, 00:21:37.698 "max_io_size": 131072, 00:21:37.698 "io_unit_size": 131072, 00:21:37.698 "max_aq_depth": 128, 00:21:37.698 "num_shared_buffers": 511, 00:21:37.698 "buf_cache_size": 4294967295, 00:21:37.698 "dif_insert_or_strip": false, 00:21:37.698 "zcopy": false, 00:21:37.698 "c2h_success": false, 00:21:37.698 "sock_priority": 0, 00:21:37.698 "abort_timeout_sec": 1, 00:21:37.698 "ack_timeout": 0, 00:21:37.698 "data_wr_pool_size": 0 00:21:37.698 } 00:21:37.698 }, 00:21:37.698 { 00:21:37.698 "method": "nvmf_create_subsystem", 00:21:37.698 "params": { 00:21:37.698 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.698 "allow_any_host": false, 00:21:37.698 "serial_number": "00000000000000000000", 00:21:37.698 "model_number": "SPDK bdev Controller", 00:21:37.698 "max_namespaces": 32, 00:21:37.698 "min_cntlid": 1, 00:21:37.698 "max_cntlid": 65519, 00:21:37.698 "ana_reporting": false 00:21:37.698 } 00:21:37.698 }, 00:21:37.698 { 00:21:37.698 "method": "nvmf_subsystem_add_host", 00:21:37.698 "params": { 00:21:37.698 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.698 "host": "nqn.2016-06.io.spdk:host1", 00:21:37.698 "psk": "key0" 00:21:37.698 } 00:21:37.698 }, 00:21:37.698 { 00:21:37.698 "method": "nvmf_subsystem_add_ns", 00:21:37.698 "params": { 00:21:37.698 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.698 "namespace": { 00:21:37.698 "nsid": 1, 00:21:37.698 "bdev_name": "malloc0", 00:21:37.698 "nguid": "EB35FC2449D340AA8601C6F79A0262C9", 00:21:37.698 "uuid": "eb35fc24-49d3-40aa-8601-c6f79a0262c9", 00:21:37.698 "no_auto_visible": false 00:21:37.698 } 00:21:37.698 } 00:21:37.698 }, 00:21:37.698 { 00:21:37.698 "method": "nvmf_subsystem_add_listener", 00:21:37.698 "params": { 00:21:37.698 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.698 "listen_address": { 00:21:37.698 "trtype": "TCP", 00:21:37.698 "adrfam": "IPv4", 00:21:37.698 "traddr": "10.0.0.2", 00:21:37.698 "trsvcid": "4420" 00:21:37.698 }, 00:21:37.698 "secure_channel": false, 00:21:37.698 "sock_impl": "ssl" 00:21:37.698 } 00:21:37.698 } 00:21:37.698 ] 00:21:37.698 } 00:21:37.698 ] 00:21:37.698 }' 00:21:37.698 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:37.957 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:37.957 "subsystems": [ 00:21:37.957 { 00:21:37.957 "subsystem": "keyring", 00:21:37.957 "config": [ 00:21:37.957 { 00:21:37.957 "method": "keyring_file_add_key", 00:21:37.957 "params": { 00:21:37.957 "name": "key0", 00:21:37.957 "path": "/tmp/tmp.C9ubOQ6oap" 00:21:37.957 } 00:21:37.957 } 00:21:37.957 ] 00:21:37.957 }, 00:21:37.957 { 00:21:37.957 "subsystem": "iobuf", 00:21:37.957 "config": [ 00:21:37.957 { 00:21:37.957 "method": "iobuf_set_options", 00:21:37.957 "params": { 00:21:37.957 "small_pool_count": 8192, 00:21:37.957 "large_pool_count": 1024, 00:21:37.957 "small_bufsize": 8192, 00:21:37.957 "large_bufsize": 135168, 00:21:37.957 "enable_numa": false 00:21:37.957 } 00:21:37.957 } 00:21:37.957 ] 00:21:37.957 }, 00:21:37.957 { 00:21:37.957 "subsystem": "sock", 00:21:37.957 "config": [ 00:21:37.957 { 00:21:37.957 "method": "sock_set_default_impl", 00:21:37.957 "params": { 00:21:37.957 "impl_name": "posix" 00:21:37.957 } 00:21:37.957 }, 00:21:37.957 { 00:21:37.958 "method": "sock_impl_set_options", 00:21:37.958 
"params": { 00:21:37.958 "impl_name": "ssl", 00:21:37.958 "recv_buf_size": 4096, 00:21:37.958 "send_buf_size": 4096, 00:21:37.958 "enable_recv_pipe": true, 00:21:37.958 "enable_quickack": false, 00:21:37.958 "enable_placement_id": 0, 00:21:37.958 "enable_zerocopy_send_server": true, 00:21:37.958 "enable_zerocopy_send_client": false, 00:21:37.958 "zerocopy_threshold": 0, 00:21:37.958 "tls_version": 0, 00:21:37.958 "enable_ktls": false 00:21:37.958 } 00:21:37.958 }, 00:21:37.958 { 00:21:37.958 "method": "sock_impl_set_options", 00:21:37.958 "params": { 00:21:37.958 "impl_name": "posix", 00:21:37.958 "recv_buf_size": 2097152, 00:21:37.958 "send_buf_size": 2097152, 00:21:37.958 "enable_recv_pipe": true, 00:21:37.958 "enable_quickack": false, 00:21:37.958 "enable_placement_id": 0, 00:21:37.958 "enable_zerocopy_send_server": true, 00:21:37.958 "enable_zerocopy_send_client": false, 00:21:37.958 "zerocopy_threshold": 0, 00:21:37.958 "tls_version": 0, 00:21:37.958 "enable_ktls": false 00:21:37.958 } 00:21:37.958 } 00:21:37.958 ] 00:21:37.958 }, 00:21:37.958 { 00:21:37.958 "subsystem": "vmd", 00:21:37.958 "config": [] 00:21:37.958 }, 00:21:37.958 { 00:21:37.958 "subsystem": "accel", 00:21:37.958 "config": [ 00:21:37.958 { 00:21:37.958 "method": "accel_set_options", 00:21:37.958 "params": { 00:21:37.958 "small_cache_size": 128, 00:21:37.958 "large_cache_size": 16, 00:21:37.958 "task_count": 2048, 00:21:37.958 "sequence_count": 2048, 00:21:37.958 "buf_count": 2048 00:21:37.958 } 00:21:37.958 } 00:21:37.958 ] 00:21:37.958 }, 00:21:37.958 { 00:21:37.958 "subsystem": "bdev", 00:21:37.958 "config": [ 00:21:37.958 { 00:21:37.958 "method": "bdev_set_options", 00:21:37.958 "params": { 00:21:37.958 "bdev_io_pool_size": 65535, 00:21:37.958 "bdev_io_cache_size": 256, 00:21:37.958 "bdev_auto_examine": true, 00:21:37.958 "iobuf_small_cache_size": 128, 00:21:37.958 "iobuf_large_cache_size": 16 00:21:37.958 } 00:21:37.958 }, 00:21:37.958 { 00:21:37.958 "method": "bdev_raid_set_options", 00:21:37.958 "params": { 00:21:37.958 "process_window_size_kb": 1024, 00:21:37.958 "process_max_bandwidth_mb_sec": 0 00:21:37.958 } 00:21:37.958 }, 00:21:37.958 { 00:21:37.958 "method": "bdev_iscsi_set_options", 00:21:37.958 "params": { 00:21:37.958 "timeout_sec": 30 00:21:37.958 } 00:21:37.958 }, 00:21:37.958 { 00:21:37.958 "method": "bdev_nvme_set_options", 00:21:37.958 "params": { 00:21:37.958 "action_on_timeout": "none", 00:21:37.958 "timeout_us": 0, 00:21:37.958 "timeout_admin_us": 0, 00:21:37.958 "keep_alive_timeout_ms": 10000, 00:21:37.958 "arbitration_burst": 0, 00:21:37.958 "low_priority_weight": 0, 00:21:37.958 "medium_priority_weight": 0, 00:21:37.958 "high_priority_weight": 0, 00:21:37.958 "nvme_adminq_poll_period_us": 10000, 00:21:37.958 "nvme_ioq_poll_period_us": 0, 00:21:37.958 "io_queue_requests": 512, 00:21:37.958 "delay_cmd_submit": true, 00:21:37.958 "transport_retry_count": 4, 00:21:37.958 "bdev_retry_count": 3, 00:21:37.958 "transport_ack_timeout": 0, 00:21:37.958 "ctrlr_loss_timeout_sec": 0, 00:21:37.958 "reconnect_delay_sec": 0, 00:21:37.958 "fast_io_fail_timeout_sec": 0, 00:21:37.958 "disable_auto_failback": false, 00:21:37.958 "generate_uuids": false, 00:21:37.958 "transport_tos": 0, 00:21:37.958 "nvme_error_stat": false, 00:21:37.958 "rdma_srq_size": 0, 00:21:37.958 "io_path_stat": false, 00:21:37.958 "allow_accel_sequence": false, 00:21:37.958 "rdma_max_cq_size": 0, 00:21:37.958 "rdma_cm_event_timeout_ms": 0, 00:21:37.958 "dhchap_digests": [ 00:21:37.958 "sha256", 00:21:37.958 "sha384", 00:21:37.958 
"sha512" 00:21:37.958 ], 00:21:37.958 "dhchap_dhgroups": [ 00:21:37.958 "null", 00:21:37.958 "ffdhe2048", 00:21:37.958 "ffdhe3072", 00:21:37.958 "ffdhe4096", 00:21:37.958 "ffdhe6144", 00:21:37.958 "ffdhe8192" 00:21:37.958 ] 00:21:37.958 } 00:21:37.958 }, 00:21:37.958 { 00:21:37.958 "method": "bdev_nvme_attach_controller", 00:21:37.958 "params": { 00:21:37.958 "name": "nvme0", 00:21:37.958 "trtype": "TCP", 00:21:37.958 "adrfam": "IPv4", 00:21:37.958 "traddr": "10.0.0.2", 00:21:37.958 "trsvcid": "4420", 00:21:37.958 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.958 "prchk_reftag": false, 00:21:37.958 "prchk_guard": false, 00:21:37.958 "ctrlr_loss_timeout_sec": 0, 00:21:37.958 "reconnect_delay_sec": 0, 00:21:37.958 "fast_io_fail_timeout_sec": 0, 00:21:37.958 "psk": "key0", 00:21:37.958 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:37.958 "hdgst": false, 00:21:37.958 "ddgst": false, 00:21:37.958 "multipath": "multipath" 00:21:37.958 } 00:21:37.958 }, 00:21:37.958 { 00:21:37.958 "method": "bdev_nvme_set_hotplug", 00:21:37.958 "params": { 00:21:37.958 "period_us": 100000, 00:21:37.958 "enable": false 00:21:37.958 } 00:21:37.958 }, 00:21:37.958 { 00:21:37.958 "method": "bdev_enable_histogram", 00:21:37.958 "params": { 00:21:37.958 "name": "nvme0n1", 00:21:37.958 "enable": true 00:21:37.958 } 00:21:37.958 }, 00:21:37.958 { 00:21:37.958 "method": "bdev_wait_for_examine" 00:21:37.958 } 00:21:37.958 ] 00:21:37.958 }, 00:21:37.958 { 00:21:37.958 "subsystem": "nbd", 00:21:37.958 "config": [] 00:21:37.958 } 00:21:37.958 ] 00:21:37.958 }' 00:21:37.958 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1995686 00:21:37.958 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1995686 ']' 00:21:37.958 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1995686 00:21:37.958 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:37.958 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.958 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1995686 00:21:37.958 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:37.958 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:37.958 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1995686' 00:21:37.958 killing process with pid 1995686 00:21:37.958 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1995686 00:21:37.958 Received shutdown signal, test time was about 1.000000 seconds 00:21:37.958 00:21:37.958 Latency(us) 00:21:37.958 [2024-11-28T07:20:35.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.958 [2024-11-28T07:20:35.247Z] =================================================================================================================== 00:21:37.958 [2024-11-28T07:20:35.247Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:37.958 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1995686 00:21:38.218 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1995610 00:21:38.218 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1995610 
']' 00:21:38.218 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1995610 00:21:38.218 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:38.218 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:38.218 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1995610 00:21:38.218 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:38.218 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:38.218 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1995610' 00:21:38.218 killing process with pid 1995610 00:21:38.218 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1995610 00:21:38.218 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1995610 00:21:38.218 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:38.218 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:38.218 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:38.218 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:38.218 "subsystems": [ 00:21:38.218 { 00:21:38.218 "subsystem": "keyring", 00:21:38.218 "config": [ 00:21:38.218 { 00:21:38.218 "method": "keyring_file_add_key", 00:21:38.218 "params": { 00:21:38.218 "name": "key0", 00:21:38.218 "path": "/tmp/tmp.C9ubOQ6oap" 00:21:38.218 } 00:21:38.218 } 00:21:38.218 ] 00:21:38.218 }, 00:21:38.218 { 00:21:38.218 "subsystem": "iobuf", 00:21:38.218 "config": [ 00:21:38.218 { 00:21:38.218 "method": "iobuf_set_options", 00:21:38.218 "params": { 00:21:38.218 "small_pool_count": 8192, 00:21:38.218 "large_pool_count": 1024, 00:21:38.218 "small_bufsize": 8192, 00:21:38.218 "large_bufsize": 135168, 00:21:38.218 "enable_numa": false 00:21:38.218 } 00:21:38.218 } 00:21:38.218 ] 00:21:38.218 }, 00:21:38.218 { 00:21:38.218 "subsystem": "sock", 00:21:38.218 "config": [ 00:21:38.218 { 00:21:38.218 "method": "sock_set_default_impl", 00:21:38.218 "params": { 00:21:38.218 "impl_name": "posix" 00:21:38.218 } 00:21:38.218 }, 00:21:38.218 { 00:21:38.218 "method": "sock_impl_set_options", 00:21:38.218 "params": { 00:21:38.218 "impl_name": "ssl", 00:21:38.218 "recv_buf_size": 4096, 00:21:38.218 "send_buf_size": 4096, 00:21:38.218 "enable_recv_pipe": true, 00:21:38.218 "enable_quickack": false, 00:21:38.218 "enable_placement_id": 0, 00:21:38.218 "enable_zerocopy_send_server": true, 00:21:38.218 "enable_zerocopy_send_client": false, 00:21:38.218 "zerocopy_threshold": 0, 00:21:38.218 "tls_version": 0, 00:21:38.218 "enable_ktls": false 00:21:38.218 } 00:21:38.218 }, 00:21:38.218 { 00:21:38.218 "method": "sock_impl_set_options", 00:21:38.218 "params": { 00:21:38.218 "impl_name": "posix", 00:21:38.218 "recv_buf_size": 2097152, 00:21:38.218 "send_buf_size": 2097152, 00:21:38.218 "enable_recv_pipe": true, 00:21:38.218 "enable_quickack": false, 00:21:38.218 "enable_placement_id": 0, 00:21:38.218 "enable_zerocopy_send_server": true, 00:21:38.218 "enable_zerocopy_send_client": false, 00:21:38.218 "zerocopy_threshold": 0, 00:21:38.218 "tls_version": 0, 00:21:38.218 "enable_ktls": 
false 00:21:38.218 } 00:21:38.218 } 00:21:38.218 ] 00:21:38.218 }, 00:21:38.218 { 00:21:38.218 "subsystem": "vmd", 00:21:38.218 "config": [] 00:21:38.218 }, 00:21:38.218 { 00:21:38.218 "subsystem": "accel", 00:21:38.218 "config": [ 00:21:38.218 { 00:21:38.218 "method": "accel_set_options", 00:21:38.218 "params": { 00:21:38.218 "small_cache_size": 128, 00:21:38.218 "large_cache_size": 16, 00:21:38.218 "task_count": 2048, 00:21:38.218 "sequence_count": 2048, 00:21:38.218 "buf_count": 2048 00:21:38.218 } 00:21:38.218 } 00:21:38.218 ] 00:21:38.218 }, 00:21:38.218 { 00:21:38.218 "subsystem": "bdev", 00:21:38.218 "config": [ 00:21:38.218 { 00:21:38.218 "method": "bdev_set_options", 00:21:38.218 "params": { 00:21:38.218 "bdev_io_pool_size": 65535, 00:21:38.218 "bdev_io_cache_size": 256, 00:21:38.218 "bdev_auto_examine": true, 00:21:38.218 "iobuf_small_cache_size": 128, 00:21:38.218 "iobuf_large_cache_size": 16 00:21:38.218 } 00:21:38.218 }, 00:21:38.218 { 00:21:38.218 "method": "bdev_raid_set_options", 00:21:38.218 "params": { 00:21:38.218 "process_window_size_kb": 1024, 00:21:38.218 "process_max_bandwidth_mb_sec": 0 00:21:38.218 } 00:21:38.218 }, 00:21:38.218 { 00:21:38.218 "method": "bdev_iscsi_set_options", 00:21:38.218 "params": { 00:21:38.218 "timeout_sec": 30 00:21:38.218 } 00:21:38.218 }, 00:21:38.218 { 00:21:38.218 "method": "bdev_nvme_set_options", 00:21:38.218 "params": { 00:21:38.218 "action_on_timeout": "none", 00:21:38.218 "timeout_us": 0, 00:21:38.218 "timeout_admin_us": 0, 00:21:38.218 "keep_alive_timeout_ms": 10000, 00:21:38.218 "arbitration_burst": 0, 00:21:38.218 "low_priority_weight": 0, 00:21:38.218 "medium_priority_weight": 0, 00:21:38.218 "high_priority_weight": 0, 00:21:38.218 "nvme_adminq_poll_period_us": 10000, 00:21:38.218 "nvme_ioq_poll_period_us": 0, 00:21:38.218 "io_queue_requests": 0, 00:21:38.218 "delay_cmd_submit": true, 00:21:38.218 "transport_retry_count": 4, 00:21:38.218 "bdev_retry_count": 3, 00:21:38.218 "transport_ack_timeout": 0, 00:21:38.218 "ctrlr_loss_timeout_sec": 0, 00:21:38.218 "reconnect_delay_sec": 0, 00:21:38.218 "fast_io_fail_timeout_sec": 0, 00:21:38.218 "disable_auto_failback": false, 00:21:38.218 "generate_uuids": false, 00:21:38.218 "transport_tos": 0, 00:21:38.218 "nvme_error_stat": false, 00:21:38.218 "rdma_srq_size": 0, 00:21:38.218 "io_path_stat": false, 00:21:38.218 "allow_accel_sequence": false, 00:21:38.218 "rdma_max_cq_size": 0, 00:21:38.218 "rdma_cm_event_timeout_ms": 0, 00:21:38.218 "dhchap_digests": [ 00:21:38.218 "sha256", 00:21:38.218 "sha384", 00:21:38.218 "sha512" 00:21:38.218 ], 00:21:38.218 "dhchap_dhgroups": [ 00:21:38.218 "null", 00:21:38.218 "ffdhe2048", 00:21:38.218 "ffdhe3072", 00:21:38.218 "ffdhe4096", 00:21:38.218 "ffdhe6144", 00:21:38.218 "ffdhe8192" 00:21:38.218 ] 00:21:38.218 } 00:21:38.218 }, 00:21:38.218 { 00:21:38.218 "method": "bdev_nvme_set_hotplug", 00:21:38.218 "params": { 00:21:38.218 "period_us": 100000, 00:21:38.218 "enable": false 00:21:38.218 } 00:21:38.218 }, 00:21:38.218 { 00:21:38.218 "method": "bdev_malloc_create", 00:21:38.219 "params": { 00:21:38.219 "name": "malloc0", 00:21:38.219 "num_blocks": 8192, 00:21:38.219 "block_size": 4096, 00:21:38.219 "physical_block_size": 4096, 00:21:38.219 "uuid": "eb35fc24-49d3-40aa-8601-c6f79a0262c9", 00:21:38.219 "optimal_io_boundary": 0, 00:21:38.219 "md_size": 0, 00:21:38.219 "dif_type": 0, 00:21:38.219 "dif_is_head_of_md": false, 00:21:38.219 "dif_pi_format": 0 00:21:38.219 } 00:21:38.219 }, 00:21:38.219 { 00:21:38.219 "method": "bdev_wait_for_examine" 
00:21:38.219 } 00:21:38.219 ] 00:21:38.219 }, 00:21:38.219 { 00:21:38.219 "subsystem": "nbd", 00:21:38.219 "config": [] 00:21:38.219 }, 00:21:38.219 { 00:21:38.219 "subsystem": "scheduler", 00:21:38.219 "config": [ 00:21:38.219 { 00:21:38.219 "method": "framework_set_scheduler", 00:21:38.219 "params": { 00:21:38.219 "name": "static" 00:21:38.219 } 00:21:38.219 } 00:21:38.219 ] 00:21:38.219 }, 00:21:38.219 { 00:21:38.219 "subsystem": "nvmf", 00:21:38.219 "config": [ 00:21:38.219 { 00:21:38.219 "method": "nvmf_set_config", 00:21:38.219 "params": { 00:21:38.219 "discovery_filter": "match_any", 00:21:38.219 "admin_cmd_passthru": { 00:21:38.219 "identify_ctrlr": false 00:21:38.219 }, 00:21:38.219 "dhchap_digests": [ 00:21:38.219 "sha256", 00:21:38.219 "sha384", 00:21:38.219 "sha512" 00:21:38.219 ], 00:21:38.219 "dhchap_dhgroups": [ 00:21:38.219 "null", 00:21:38.219 "ffdhe2048", 00:21:38.219 "ffdhe3072", 00:21:38.219 "ffdhe4096", 00:21:38.219 "ffdhe6144", 00:21:38.219 "ffdhe8192" 00:21:38.219 ] 00:21:38.219 } 00:21:38.219 }, 00:21:38.219 { 00:21:38.219 "method": "nvmf_set_max_subsystems", 00:21:38.219 "params": { 00:21:38.219 "max_subsystems": 1024 00:21:38.219 } 00:21:38.219 }, 00:21:38.219 { 00:21:38.219 "method": "nvmf_set_crdt", 00:21:38.219 "params": { 00:21:38.219 "crdt1": 0, 00:21:38.219 "crdt2": 0, 00:21:38.219 "crdt3": 0 00:21:38.219 } 00:21:38.219 }, 00:21:38.219 { 00:21:38.219 "method": "nvmf_create_transport", 00:21:38.219 "params": { 00:21:38.219 "trtype": "TCP", 00:21:38.219 "max_queue_depth": 128, 00:21:38.219 "max_io_qpairs_per_ctrlr": 127, 00:21:38.219 "in_capsule_data_size": 4096, 00:21:38.219 "max_io_size": 131072, 00:21:38.219 "io_unit_size": 131072, 00:21:38.219 "max_aq_depth": 128, 00:21:38.219 "num_shared_buffers": 511, 00:21:38.219 "buf_cache_size": 4294967295, 00:21:38.219 "dif_insert_or_strip": false, 00:21:38.219 "zcopy": false, 00:21:38.219 "c2h_success": false, 00:21:38.219 "sock_priority": 0, 00:21:38.219 "abort_timeout_sec": 1, 00:21:38.219 "ack_timeout": 0, 00:21:38.219 "data_wr_pool_size": 0 00:21:38.219 } 00:21:38.219 }, 00:21:38.219 { 00:21:38.219 "method": "nvmf_create_subsystem", 00:21:38.219 "params": { 00:21:38.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.219 "allow_any_host": false, 00:21:38.219 "serial_number": "00000000000000000000", 00:21:38.219 "model_number": "SPDK bdev Controller", 00:21:38.219 "max_namespaces": 32, 00:21:38.219 "min_cntlid": 1, 00:21:38.219 "max_cntlid": 65519, 00:21:38.219 "ana_reporting": false 00:21:38.219 } 00:21:38.219 }, 00:21:38.219 { 00:21:38.219 "method": "nvmf_subsystem_add_host", 00:21:38.219 "params": { 00:21:38.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.219 "host": "nqn.2016-06.io.spdk:host1", 00:21:38.219 "psk": "key0" 00:21:38.219 } 00:21:38.219 }, 00:21:38.219 { 00:21:38.219 "method": "nvmf_subsystem_add_ns", 00:21:38.219 "params": { 00:21:38.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.219 "namespace": { 00:21:38.219 "nsid": 1, 00:21:38.219 "bdev_name": "malloc0", 00:21:38.219 "nguid": "EB35FC2449D340AA8601C6F79A0262C9", 00:21:38.219 "uuid": "eb35fc24-49d3-40aa-8601-c6f79a0262c9", 00:21:38.219 "no_auto_visible": false 00:21:38.219 } 00:21:38.219 } 00:21:38.219 }, 00:21:38.219 { 00:21:38.219 "method": "nvmf_subsystem_add_listener", 00:21:38.219 "params": { 00:21:38.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.219 "listen_address": { 00:21:38.219 "trtype": "TCP", 00:21:38.219 "adrfam": "IPv4", 00:21:38.219 "traddr": "10.0.0.2", 00:21:38.219 "trsvcid": "4420" 00:21:38.219 }, 00:21:38.219 
"secure_channel": false, 00:21:38.219 "sock_impl": "ssl" 00:21:38.219 } 00:21:38.219 } 00:21:38.219 ] 00:21:38.219 } 00:21:38.219 ] 00:21:38.219 }' 00:21:38.219 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.479 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1996367 00:21:38.479 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1996367 00:21:38.479 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:38.479 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1996367 ']' 00:21:38.479 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.479 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.479 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.479 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.479 08:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.479 [2024-11-28 08:20:35.561074] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:21:38.479 [2024-11-28 08:20:35.561132] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.479 [2024-11-28 08:20:35.654383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.479 [2024-11-28 08:20:35.683668] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.479 [2024-11-28 08:20:35.683697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.479 [2024-11-28 08:20:35.683703] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.479 [2024-11-28 08:20:35.683708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.479 [2024-11-28 08:20:35.683712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:38.479 [2024-11-28 08:20:35.684179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.740 [2024-11-28 08:20:35.878294] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.740 [2024-11-28 08:20:35.910326] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:38.740 [2024-11-28 08:20:35.910526] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.311 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:39.311 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:39.311 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:39.311 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:39.311 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.311 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.311 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1996593 00:21:39.311 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1996593 /var/tmp/bdevperf.sock 00:21:39.311 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1996593 ']' 00:21:39.311 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:39.311 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:39.311 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:39.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:39.311 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:39.311 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:39.311 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.311 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:39.311 "subsystems": [ 00:21:39.311 { 00:21:39.311 "subsystem": "keyring", 00:21:39.311 "config": [ 00:21:39.311 { 00:21:39.311 "method": "keyring_file_add_key", 00:21:39.311 "params": { 00:21:39.311 "name": "key0", 00:21:39.311 "path": "/tmp/tmp.C9ubOQ6oap" 00:21:39.311 } 00:21:39.311 } 00:21:39.311 ] 00:21:39.311 }, 00:21:39.311 { 00:21:39.311 "subsystem": "iobuf", 00:21:39.311 "config": [ 00:21:39.311 { 00:21:39.311 "method": "iobuf_set_options", 00:21:39.311 "params": { 00:21:39.311 "small_pool_count": 8192, 00:21:39.311 "large_pool_count": 1024, 00:21:39.311 "small_bufsize": 8192, 00:21:39.311 "large_bufsize": 135168, 00:21:39.311 "enable_numa": false 00:21:39.311 } 00:21:39.311 } 00:21:39.311 ] 00:21:39.311 }, 00:21:39.311 { 00:21:39.311 "subsystem": "sock", 00:21:39.311 "config": [ 00:21:39.311 { 00:21:39.311 "method": "sock_set_default_impl", 00:21:39.311 "params": { 00:21:39.311 "impl_name": "posix" 00:21:39.311 } 00:21:39.311 }, 00:21:39.311 { 00:21:39.311 "method": "sock_impl_set_options", 00:21:39.311 "params": { 00:21:39.311 "impl_name": "ssl", 00:21:39.311 "recv_buf_size": 4096, 00:21:39.311 "send_buf_size": 4096, 00:21:39.311 "enable_recv_pipe": true, 00:21:39.311 "enable_quickack": false, 00:21:39.311 "enable_placement_id": 0, 00:21:39.311 "enable_zerocopy_send_server": true, 00:21:39.311 "enable_zerocopy_send_client": false, 00:21:39.311 "zerocopy_threshold": 0, 00:21:39.311 "tls_version": 0, 00:21:39.311 "enable_ktls": false 00:21:39.311 } 00:21:39.311 }, 00:21:39.311 { 00:21:39.311 "method": "sock_impl_set_options", 00:21:39.311 "params": { 00:21:39.311 "impl_name": "posix", 00:21:39.311 "recv_buf_size": 2097152, 00:21:39.311 "send_buf_size": 2097152, 00:21:39.311 "enable_recv_pipe": true, 00:21:39.311 "enable_quickack": false, 00:21:39.311 "enable_placement_id": 0, 00:21:39.312 "enable_zerocopy_send_server": true, 00:21:39.312 "enable_zerocopy_send_client": false, 00:21:39.312 "zerocopy_threshold": 0, 00:21:39.312 "tls_version": 0, 00:21:39.312 "enable_ktls": false 00:21:39.312 } 00:21:39.312 } 00:21:39.312 ] 00:21:39.312 }, 00:21:39.312 { 00:21:39.312 "subsystem": "vmd", 00:21:39.312 "config": [] 00:21:39.312 }, 00:21:39.312 { 00:21:39.312 "subsystem": "accel", 00:21:39.312 "config": [ 00:21:39.312 { 00:21:39.312 "method": "accel_set_options", 00:21:39.312 "params": { 00:21:39.312 "small_cache_size": 128, 00:21:39.312 "large_cache_size": 16, 00:21:39.312 "task_count": 2048, 00:21:39.312 "sequence_count": 2048, 00:21:39.312 "buf_count": 2048 00:21:39.312 } 00:21:39.312 } 00:21:39.312 ] 00:21:39.312 }, 00:21:39.312 { 00:21:39.312 "subsystem": "bdev", 00:21:39.312 "config": [ 00:21:39.312 { 00:21:39.312 "method": "bdev_set_options", 00:21:39.312 "params": { 00:21:39.312 "bdev_io_pool_size": 65535, 00:21:39.312 "bdev_io_cache_size": 256, 00:21:39.312 "bdev_auto_examine": true, 00:21:39.312 "iobuf_small_cache_size": 128, 00:21:39.312 "iobuf_large_cache_size": 16 00:21:39.312 } 00:21:39.312 }, 00:21:39.312 { 00:21:39.312 "method": 
"bdev_raid_set_options", 00:21:39.312 "params": { 00:21:39.312 "process_window_size_kb": 1024, 00:21:39.312 "process_max_bandwidth_mb_sec": 0 00:21:39.312 } 00:21:39.312 }, 00:21:39.312 { 00:21:39.312 "method": "bdev_iscsi_set_options", 00:21:39.312 "params": { 00:21:39.312 "timeout_sec": 30 00:21:39.312 } 00:21:39.312 }, 00:21:39.312 { 00:21:39.312 "method": "bdev_nvme_set_options", 00:21:39.312 "params": { 00:21:39.312 "action_on_timeout": "none", 00:21:39.312 "timeout_us": 0, 00:21:39.312 "timeout_admin_us": 0, 00:21:39.312 "keep_alive_timeout_ms": 10000, 00:21:39.312 "arbitration_burst": 0, 00:21:39.312 "low_priority_weight": 0, 00:21:39.312 "medium_priority_weight": 0, 00:21:39.312 "high_priority_weight": 0, 00:21:39.312 "nvme_adminq_poll_period_us": 10000, 00:21:39.312 "nvme_ioq_poll_period_us": 0, 00:21:39.312 "io_queue_requests": 512, 00:21:39.312 "delay_cmd_submit": true, 00:21:39.312 "transport_retry_count": 4, 00:21:39.312 "bdev_retry_count": 3, 00:21:39.312 "transport_ack_timeout": 0, 00:21:39.312 "ctrlr_loss_timeout_sec": 0, 00:21:39.312 "reconnect_delay_sec": 0, 00:21:39.312 "fast_io_fail_timeout_sec": 0, 00:21:39.312 "disable_auto_failback": false, 00:21:39.312 "generate_uuids": false, 00:21:39.312 "transport_tos": 0, 00:21:39.312 "nvme_error_stat": false, 00:21:39.312 "rdma_srq_size": 0, 00:21:39.312 "io_path_stat": false, 00:21:39.312 "allow_accel_sequence": false, 00:21:39.312 "rdma_max_cq_size": 0, 00:21:39.312 "rdma_cm_event_timeout_ms": 0, 00:21:39.312 "dhchap_digests": [ 00:21:39.312 "sha256", 00:21:39.312 "sha384", 00:21:39.312 "sha512" 00:21:39.312 ], 00:21:39.312 "dhchap_dhgroups": [ 00:21:39.312 "null", 00:21:39.312 "ffdhe2048", 00:21:39.312 "ffdhe3072", 00:21:39.312 "ffdhe4096", 00:21:39.312 "ffdhe6144", 00:21:39.312 "ffdhe8192" 00:21:39.312 ] 00:21:39.312 } 00:21:39.312 }, 00:21:39.312 { 00:21:39.312 "method": "bdev_nvme_attach_controller", 00:21:39.312 "params": { 00:21:39.312 "name": "nvme0", 00:21:39.312 "trtype": "TCP", 00:21:39.312 "adrfam": "IPv4", 00:21:39.312 "traddr": "10.0.0.2", 00:21:39.312 "trsvcid": "4420", 00:21:39.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.312 "prchk_reftag": false, 00:21:39.312 "prchk_guard": false, 00:21:39.312 "ctrlr_loss_timeout_sec": 0, 00:21:39.312 "reconnect_delay_sec": 0, 00:21:39.312 "fast_io_fail_timeout_sec": 0, 00:21:39.312 "psk": "key0", 00:21:39.312 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:39.312 "hdgst": false, 00:21:39.312 "ddgst": false, 00:21:39.312 "multipath": "multipath" 00:21:39.312 } 00:21:39.312 }, 00:21:39.312 { 00:21:39.312 "method": "bdev_nvme_set_hotplug", 00:21:39.312 "params": { 00:21:39.312 "period_us": 100000, 00:21:39.312 "enable": false 00:21:39.312 } 00:21:39.312 }, 00:21:39.312 { 00:21:39.312 "method": "bdev_enable_histogram", 00:21:39.312 "params": { 00:21:39.312 "name": "nvme0n1", 00:21:39.312 "enable": true 00:21:39.312 } 00:21:39.312 }, 00:21:39.312 { 00:21:39.312 "method": "bdev_wait_for_examine" 00:21:39.312 } 00:21:39.312 ] 00:21:39.312 }, 00:21:39.312 { 00:21:39.312 "subsystem": "nbd", 00:21:39.312 "config": [] 00:21:39.312 } 00:21:39.312 ] 00:21:39.312 }' 00:21:39.312 [2024-11-28 08:20:36.447036] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:21:39.312 [2024-11-28 08:20:36.447124] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1996593 ] 00:21:39.312 [2024-11-28 08:20:36.531898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.312 [2024-11-28 08:20:36.562409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.573 [2024-11-28 08:20:36.698307] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:40.144 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.144 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:40.144 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:40.144 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:40.144 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.144 08:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:40.404 Running I/O for 1 seconds... 00:21:41.348 4582.00 IOPS, 17.90 MiB/s 00:21:41.348 Latency(us) 00:21:41.348 [2024-11-28T07:20:38.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.348 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:41.348 Verification LBA range: start 0x0 length 0x2000 00:21:41.348 nvme0n1 : 1.05 4489.38 17.54 0.00 0.00 27942.53 5379.41 79080.11 00:21:41.348 [2024-11-28T07:20:38.637Z] =================================================================================================================== 00:21:41.348 [2024-11-28T07:20:38.637Z] Total : 4489.38 17.54 0.00 0.00 27942.53 5379.41 79080.11 00:21:41.348 { 00:21:41.348 "results": [ 00:21:41.348 { 00:21:41.348 "job": "nvme0n1", 00:21:41.348 "core_mask": "0x2", 00:21:41.348 "workload": "verify", 00:21:41.348 "status": "finished", 00:21:41.348 "verify_range": { 00:21:41.348 "start": 0, 00:21:41.348 "length": 8192 00:21:41.348 }, 00:21:41.348 "queue_depth": 128, 00:21:41.348 "io_size": 4096, 00:21:41.348 "runtime": 1.049143, 00:21:41.348 "iops": 4489.378473668508, 00:21:41.348 "mibps": 17.53663466276761, 00:21:41.348 "io_failed": 0, 00:21:41.348 "io_timeout": 0, 00:21:41.348 "avg_latency_us": 27942.532257607927, 00:21:41.348 "min_latency_us": 5379.413333333333, 00:21:41.348 "max_latency_us": 79080.10666666667 00:21:41.348 } 00:21:41.348 ], 00:21:41.348 "core_count": 1 00:21:41.348 } 00:21:41.348 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:21:41.348 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:41.348 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:41.348 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:21:41.348 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:21:41.348 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:21:41.348 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:41.348 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:41.348 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:41.348 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:41.348 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:41.348 nvmf_trace.0 00:21:41.609 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:21:41.609 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1996593 00:21:41.609 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1996593 ']' 00:21:41.609 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1996593 00:21:41.609 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:41.609 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.609 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1996593 00:21:41.609 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:41.609 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:41.609 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1996593' 00:21:41.609 killing process with pid 1996593 00:21:41.609 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1996593 00:21:41.609 Received shutdown signal, test time was about 1.000000 seconds 00:21:41.609 00:21:41.609 Latency(us) 00:21:41.609 [2024-11-28T07:20:38.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.609 [2024-11-28T07:20:38.898Z] =================================================================================================================== 00:21:41.609 [2024-11-28T07:20:38.898Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:41.609 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1996593 00:21:41.609 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:41.609 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:41.609 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:41.609 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:41.610 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:41.610 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:41.610 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:41.610 rmmod nvme_tcp 00:21:41.610 rmmod nvme_fabrics 00:21:41.610 rmmod nvme_keyring 00:21:41.871 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:41.871 08:20:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:21:41.871 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:21:41.871 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1996367 ']' 00:21:41.871 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1996367 00:21:41.871 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1996367 ']' 00:21:41.871 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1996367 00:21:41.871 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:41.872 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.872 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1996367 00:21:41.872 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:41.872 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:41.872 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1996367' 00:21:41.872 killing process with pid 1996367 00:21:41.872 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1996367 00:21:41.872 08:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1996367 00:21:41.872 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:41.872 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:41.872 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:41.872 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:21:41.872 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:21:41.872 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:41.872 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:21:41.872 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:41.872 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:41.872 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.872 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.872 08:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.425 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:44.425 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.gJmiaQvymR /tmp/tmp.KWlEBXywlp /tmp/tmp.C9ubOQ6oap 00:21:44.425 00:21:44.425 real 1m27.098s 00:21:44.425 user 2m17.310s 00:21:44.425 sys 0m26.968s 00:21:44.425 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:44.425 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.425 ************************************ 00:21:44.425 END TEST nvmf_tls 
00:21:44.425 ************************************ 00:21:44.425 08:20:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:44.425 08:20:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:44.426 ************************************ 00:21:44.426 START TEST nvmf_fips 00:21:44.426 ************************************ 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:44.426 * Looking for test storage... 00:21:44.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:44.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.426 --rc genhtml_branch_coverage=1 00:21:44.426 --rc genhtml_function_coverage=1 00:21:44.426 --rc genhtml_legend=1 00:21:44.426 --rc geninfo_all_blocks=1 00:21:44.426 --rc geninfo_unexecuted_blocks=1 00:21:44.426 00:21:44.426 ' 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:44.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.426 --rc genhtml_branch_coverage=1 00:21:44.426 --rc genhtml_function_coverage=1 00:21:44.426 --rc genhtml_legend=1 00:21:44.426 --rc geninfo_all_blocks=1 00:21:44.426 --rc geninfo_unexecuted_blocks=1 00:21:44.426 00:21:44.426 ' 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:44.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.426 --rc genhtml_branch_coverage=1 00:21:44.426 --rc genhtml_function_coverage=1 00:21:44.426 --rc genhtml_legend=1 00:21:44.426 --rc geninfo_all_blocks=1 00:21:44.426 --rc geninfo_unexecuted_blocks=1 00:21:44.426 00:21:44.426 ' 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:44.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.426 --rc genhtml_branch_coverage=1 00:21:44.426 --rc genhtml_function_coverage=1 00:21:44.426 --rc genhtml_legend=1 00:21:44.426 --rc geninfo_all_blocks=1 00:21:44.426 --rc geninfo_unexecuted_blocks=1 00:21:44.426 00:21:44.426 ' 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.426 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:44.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:21:44.427 08:20:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:44.427 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:21:44.428 Error setting digest 00:21:44.428 40727EEE447F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:44.428 40727EEE447F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:44.428 
08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:21:44.428 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:52.577 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:52.577 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:52.577 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:52.577 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:52.577 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:52.577 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:52.577 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:52.578 08:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:52.578 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:52.578 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:52.578 08:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:52.578 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:52.578 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:52.578 08:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:52.578 08:20:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:52.578 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:52.578 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:52.578 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:52.578 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:52.578 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:52.578 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:52.578 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:52.578 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:52.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:52.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:21:52.578 00:21:52.578 --- 10.0.0.2 ping statistics --- 00:21:52.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.578 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:21:52.578 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:52.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:52.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:21:52.578 00:21:52.578 --- 10.0.0.1 ping statistics --- 00:21:52.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:52.578 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:21:52.578 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:52.578 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:21:52.578 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:52.578 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:52.578 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:52.578 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:52.578 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:52.578 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:52.578 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:52.578 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:52.578 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:52.579 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:52.579 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:52.579 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2001418 00:21:52.579 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2001418 00:21:52.579 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:52.579 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2001418 ']' 00:21:52.579 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.579 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:52.579 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.579 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:52.579 08:20:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:52.579 [2024-11-28 08:20:49.315712] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
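Before the FIPS-mode target comes up, nvmf_tcp_init builds the two-sided test network traced above out of the e810 ports: the target-side port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1, an iptables rule opens the NVMe/TCP listen port, and a ping in each direction verifies the link. Condensed into a sketch (interface names and addresses are specific to this rig):

    ip netns add cvl_0_0_ns_spdk                  # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open NVMe/TCP port
    ping -c 1 10.0.0.2                            # target reachable from the initiator

This is also why the nvmf_tgt invocation above carries the "ip netns exec cvl_0_0_ns_spdk" prefix: the target runs entirely inside that namespace.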
00:21:52.579 [2024-11-28 08:20:49.315786] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.579 [2024-11-28 08:20:49.415802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.579 [2024-11-28 08:20:49.465455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.579 [2024-11-28 08:20:49.465507] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.579 [2024-11-28 08:20:49.465516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.579 [2024-11-28 08:20:49.465523] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.579 [2024-11-28 08:20:49.465530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:52.579 [2024-11-28 08:20:49.466308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.840 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:52.840 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:52.840 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:52.840 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:52.840 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:53.102 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.102 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:53.102 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:53.102 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:53.102 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.FjO 00:21:53.102 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:53.102 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.FjO 00:21:53.102 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.FjO 00:21:53.102 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.FjO 00:21:53.102 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:53.102 [2024-11-28 08:20:50.334051] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.102 [2024-11-28 08:20:50.350035] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:53.102 [2024-11-28 08:20:50.350380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:53.364 malloc0 00:21:53.364 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:53.364 08:20:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2001522 00:21:53.364 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2001522 /var/tmp/bdevperf.sock 00:21:53.364 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:53.364 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2001522 ']' 00:21:53.364 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:53.364 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.364 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:53.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:53.365 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.365 08:20:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:53.365 [2024-11-28 08:20:50.492759] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:21:53.365 [2024-11-28 08:20:50.492839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001522 ] 00:21:53.365 [2024-11-28 08:20:50.585692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.365 [2024-11-28 08:20:50.636885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:54.308 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:54.308 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:54.309 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.FjO 00:21:54.309 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:54.568 [2024-11-28 08:20:51.640554] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:54.568 TLSTESTn1 00:21:54.568 08:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:54.568 Running I/O for 10 seconds... 
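The ten-second run that follows prints per-second IOPS samples, a formatted latency table, and then a JSON summary. When scripting around bdevperf, the JSON is the part worth consuming, and jq (which this harness already uses for RPC output) reduces it cleanly; a sketch against the field names printed below, assuming the summary has been saved to a file, here hypothetically results.json:

    # Print "job iops avg_latency_us" for each job in a bdevperf JSON summary.
    jq -r '.results[] | "\(.job) \(.iops) \(.avg_latency_us)"' results.json

For the run below this would yield a single line for TLSTESTn1 with its roughly 5383 IOPS and 23.7 ms average latency.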
00:21:56.899 3722.00 IOPS, 14.54 MiB/s
[2024-11-28T07:20:55.131Z] 4636.00 IOPS, 18.11 MiB/s
[2024-11-28T07:20:56.071Z] 4866.33 IOPS, 19.01 MiB/s
[2024-11-28T07:20:57.013Z] 5129.75 IOPS, 20.04 MiB/s
[2024-11-28T07:20:57.955Z] 5261.40 IOPS, 20.55 MiB/s
[2024-11-28T07:20:58.898Z] 5263.00 IOPS, 20.56 MiB/s
[2024-11-28T07:20:59.901Z] 5267.57 IOPS, 20.58 MiB/s
[2024-11-28T07:21:01.286Z] 5375.88 IOPS, 21.00 MiB/s
[2024-11-28T07:21:02.229Z] 5373.11 IOPS, 20.99 MiB/s
[2024-11-28T07:21:02.229Z] 5378.60 IOPS, 21.01 MiB/s
00:22:04.940 Latency(us)
00:22:04.940 [2024-11-28T07:21:02.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:04.940 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:04.940 Verification LBA range: start 0x0 length 0x2000
00:22:04.940 TLSTESTn1 : 10.02 5383.26 21.03 0.00 0.00 23742.42 5106.35 34297.17
00:22:04.940 [2024-11-28T07:21:02.229Z] ===================================================================================================================
00:22:04.940 [2024-11-28T07:21:02.229Z] Total : 5383.26 21.03 0.00 0.00 23742.42 5106.35 34297.17
00:22:04.940 {
00:22:04.940   "results": [
00:22:04.940     {
00:22:04.940       "job": "TLSTESTn1",
00:22:04.940       "core_mask": "0x4",
00:22:04.940       "workload": "verify",
00:22:04.940       "status": "finished",
00:22:04.940       "verify_range": {
00:22:04.940         "start": 0,
00:22:04.940         "length": 8192
00:22:04.940       },
00:22:04.940       "queue_depth": 128,
00:22:04.940       "io_size": 4096,
00:22:04.940       "runtime": 10.01512,
00:22:04.940       "iops": 5383.260510108716,
00:22:04.940       "mibps": 21.02836136761217,
00:22:04.940       "io_failed": 0,
00:22:04.940       "io_timeout": 0,
00:22:04.940       "avg_latency_us": 23742.41783383413,
00:22:04.940       "min_latency_us": 5106.346666666666,
00:22:04.940       "max_latency_us": 34297.17333333333
00:22:04.940     }
00:22:04.940   ],
00:22:04.940   "core_count": 1
00:22:04.940 }
00:22:04.940 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:22:04.940 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:22:04.940 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id
00:22:04.940 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0
00:22:04.940 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:22:04.940 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:22:04.940 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:22:04.940 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:22:04.940 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files
00:22:04.940 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:22:04.940 nvmf_trace.0
00:22:04.940 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0
00:22:04.940 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2001522
00:22:04.940 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2001522 ']'
00:22:04.940 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2001522
00:22:04.940 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname
00:22:04.940 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:04.940 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2001522
00:22:04.940 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:22:04.940 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:22:04.940 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2001522'
00:22:04.940 killing process with pid 2001522
00:22:04.940 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2001522
00:22:04.940 Received shutdown signal, test time was about 10.000000 seconds
00:22:04.940
00:22:04.940 Latency(us)
00:22:04.940 [2024-11-28T07:21:02.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:04.940 [2024-11-28T07:21:02.229Z] ===================================================================================================================
00:22:04.940 [2024-11-28T07:21:02.229Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:04.940 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2001522
00:22:04.940 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:22:04.940 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:04.940 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync
00:22:04.940 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:04.940 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e
00:22:04.940 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:04.940 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2001418 ']'
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2001418
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2001418 ']'
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2001418
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2001418
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2001418'
00:22:05.201 killing process with pid 2001418
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2001418
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2001418
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:05.201 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.FjO
00:22:07.744
00:22:07.744 real 0m23.248s
00:22:07.744 user 0m24.921s
00:22:07.744 sys 0m9.649s
00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:22:07.744 ************************************
00:22:07.744 END TEST nvmf_fips
00:22:07.744 ************************************
00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp
00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:22:07.744 ************************************
00:22:07.744 START TEST nvmf_control_msg_list
00:22:07.744 ************************************
00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp
00:22:07.744 * Looking for test storage...
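Before the next test starts, note that the nvmftestfini teardown traced above condenses to roughly the following. A sketch only: the body of _remove_spdk_ns is not shown in the trace, so the ip netns del line is an assumption about what it does to the cvl_0_0_ns_spdk namespace created during setup.

    modprobe -v -r nvme-tcp                                # rmmod reports nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip the SPDK_NVMF-tagged ACCEPT rules
    ip netns del cvl_0_0_ns_spdk                           # assumption: performed inside _remove_spdk_ns
    ip -4 addr flush cvl_0_1
    rm -f /tmp/spdk-psk.FjO                                # drop the FIPS test's PSK interchange file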
00:22:07.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:22:07.744 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:07.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.745 --rc genhtml_branch_coverage=1 00:22:07.745 --rc genhtml_function_coverage=1 00:22:07.745 --rc genhtml_legend=1 00:22:07.745 --rc geninfo_all_blocks=1 00:22:07.745 --rc geninfo_unexecuted_blocks=1 00:22:07.745 00:22:07.745 ' 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:07.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.745 --rc genhtml_branch_coverage=1 00:22:07.745 --rc genhtml_function_coverage=1 00:22:07.745 --rc genhtml_legend=1 00:22:07.745 --rc geninfo_all_blocks=1 00:22:07.745 --rc geninfo_unexecuted_blocks=1 00:22:07.745 00:22:07.745 ' 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:07.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.745 --rc genhtml_branch_coverage=1 00:22:07.745 --rc genhtml_function_coverage=1 00:22:07.745 --rc genhtml_legend=1 00:22:07.745 --rc geninfo_all_blocks=1 00:22:07.745 --rc geninfo_unexecuted_blocks=1 00:22:07.745 00:22:07.745 ' 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:07.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.745 --rc genhtml_branch_coverage=1 00:22:07.745 --rc genhtml_function_coverage=1 00:22:07.745 --rc genhtml_legend=1 00:22:07.745 --rc geninfo_all_blocks=1 00:22:07.745 --rc geninfo_unexecuted_blocks=1 00:22:07.745 00:22:07.745 ' 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:07.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:22:07.745 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:15.905 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:15.905 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:22:15.905 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:15.905 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:15.905 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:15.905 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:15.905 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:15.905 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:22:15.905 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:15.905 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:22:15.905 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:22:15.905 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:22:15.905 08:21:12 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:22:15.905 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:22:15.905 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:22:15.905 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:15.905 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:15.905 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:15.905 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:15.905 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:15.906 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.906 08:21:12 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:15.906 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:15.906 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:15.906 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:15.906 08:21:12 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:15.906 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:15.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:15.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.725 ms
00:22:15.907
00:22:15.907 --- 10.0.0.2 ping statistics ---
00:22:15.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:15.907 rtt min/avg/max/mdev = 0.725/0.725/0.725/0.000 ms
00:22:15.907 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:15.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:15.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms
00:22:15.907
00:22:15.907 --- 10.0.0.1 ping statistics ---
00:22:15.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:15.907 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms
00:22:15.907 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:15.907 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0
00:22:15.907 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:15.907 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:15.907 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:15.907 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:15.907 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:15.907 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:15.907 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:15.907 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart
00:22:15.907 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:15.907 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:15.907 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:15.907 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2008689
00:22:15.907 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2008689
00:22:15.907 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:22:15.907 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2008689 ']'
00:22:15.907 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list --
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.907 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:15.907 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.907 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:15.907 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:15.907 [2024-11-28 08:21:12.465789] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:22:15.907 [2024-11-28 08:21:12.465858] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.907 [2024-11-28 08:21:12.564899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.907 [2024-11-28 08:21:12.614945] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.907 [2024-11-28 08:21:12.614996] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.907 [2024-11-28 08:21:12.615005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.907 [2024-11-28 08:21:12.615012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.907 [2024-11-28 08:21:12.615019] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
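Stepping back, the setup traced above wires the two ports of the E810 NIC (cvl_0_0 and cvl_0_1, found under 0000:4b:00.0/1) back to back by moving the target port into its own network namespace, so initiator and target traffic genuinely crosses the wire rather than the kernel loopback. Condensed from the trace, with the long nvmf_tgt path shortened to its repo-relative name:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # tagged SPDK_NVMF in the real rule
    ping -c 1 10.0.0.2                                                  # sanity checks in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF     # target runs inside the namespace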
00:22:15.907 [2024-11-28 08:21:12.615764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:16.170 [2024-11-28 08:21:13.339274] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:16.170 Malloc0 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.170 08:21:13 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:16.170 [2024-11-28 08:21:13.393861] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2008736 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2008737 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2008738 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2008736 00:22:16.170 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:16.431 [2024-11-28 08:21:13.494801] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:16.431 [2024-11-28 08:21:13.495050] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:16.431 [2024-11-28 08:21:13.495391] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:17.374 Initializing NVMe Controllers 00:22:17.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:17.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:22:17.374 Initialization complete. Launching workers. 
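With the target up, the test's configuration and load boil down to the RPC sequence below; rpc_cmd in the trace is SPDK's test wrapper around scripts/rpc.py, shortened here to rpc.py. The transport is created with --control-msg-num 1, i.e. a deliberately small control-message pool (the behavior under test), and three single-queue-depth perf readers on different cores then contend for it; their per-core result tables follow below.

    rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a        # -a: allow any host
    rpc.py bdev_malloc_create -b Malloc0 32 512                       # 32 MB malloc bdev, 512 B blocks
    rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Three concurrent initiators on core masks 0x2, 0x4, 0x8, as launched in the trace
    for mask in 0x2 0x4 0x8; do
        spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    done
    wait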
00:22:17.374 ========================================================
00:22:17.374 Latency(us)
00:22:17.374 Device Information : IOPS MiB/s Average min max
00:22:17.374 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40929.64 40813.42 41560.84
00:22:17.374 ========================================================
00:22:17.374 Total : 25.00 0.10 40929.64 40813.42 41560.84
00:22:17.374
00:22:17.374 Initializing NVMe Controllers
00:22:17.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:22:17.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:22:17.374 Initialization complete. Launching workers.
00:22:17.374 ========================================================
00:22:17.374 Latency(us)
00:22:17.374 Device Information : IOPS MiB/s Average min max
00:22:17.374 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40927.83 40816.29 41518.23
00:22:17.374 ========================================================
00:22:17.374 Total : 25.00 0.10 40927.83 40816.29 41518.23
00:22:17.374
00:22:17.636 Initializing NVMe Controllers
00:22:17.636 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:22:17.636 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:22:17.636 Initialization complete. Launching workers.
00:22:17.636 ========================================================
00:22:17.636 Latency(us)
00:22:17.636 Device Information : IOPS MiB/s Average min max
00:22:17.636 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1496.00 5.84 668.20 299.23 1061.66
00:22:17.636 ========================================================
00:22:17.636 Total : 1496.00 5.84 668.20 299.23 1061.66
00:22:17.636
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2008737
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2008738
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2008689 ']'
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2008689
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2008689 ']'
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2008689
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2008689
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2008689'
00:22:17.636 killing process with pid 2008689
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2008689
00:22:17.636 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2008689
00:22:17.898 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:17.898 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:17.898 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:17.898 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr
00:22:17.898 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save
00:22:17.898 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore
00:22:17.898 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:17.898 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:17.898 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:17.898 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:17.898 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:17.898 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:19.815 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:19.815
00:22:19.815 real 0m12.499s
00:22:19.815 user 0m8.038s
00:22:19.815 sys 0m6.585s
00:22:19.815 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:19.815 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:22:19.815 ************************************
00:22:19.815 END TEST nvmf_control_msg_list
************************************ 00:22:20.077 08:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:20.077 08:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:20.077 08:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:20.077 08:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:20.077 ************************************ 00:22:20.077 START TEST nvmf_wait_for_buf 00:22:20.077 ************************************ 00:22:20.077 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:20.077 * Looking for test storage... 00:22:20.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:20.077 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:20.077 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:20.077 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:20.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.340 --rc genhtml_branch_coverage=1 00:22:20.340 --rc genhtml_function_coverage=1 00:22:20.340 --rc genhtml_legend=1 00:22:20.340 --rc geninfo_all_blocks=1 00:22:20.340 --rc geninfo_unexecuted_blocks=1 00:22:20.340 00:22:20.340 ' 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:20.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.340 --rc genhtml_branch_coverage=1 00:22:20.340 --rc genhtml_function_coverage=1 00:22:20.340 --rc genhtml_legend=1 00:22:20.340 --rc geninfo_all_blocks=1 00:22:20.340 --rc geninfo_unexecuted_blocks=1 00:22:20.340 00:22:20.340 ' 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:20.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.340 --rc genhtml_branch_coverage=1 00:22:20.340 --rc genhtml_function_coverage=1 00:22:20.340 --rc genhtml_legend=1 00:22:20.340 --rc geninfo_all_blocks=1 00:22:20.340 --rc geninfo_unexecuted_blocks=1 00:22:20.340 00:22:20.340 ' 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:20.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.340 --rc genhtml_branch_coverage=1 00:22:20.340 --rc genhtml_function_coverage=1 00:22:20.340 --rc genhtml_legend=1 00:22:20.340 --rc geninfo_all_blocks=1 00:22:20.340 --rc geninfo_unexecuted_blocks=1 00:22:20.340 00:22:20.340 ' 00:22:20.340 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:20.340 08:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:20.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
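
The stderr line "common.sh: line 33: [: : integer expression expected" above is harmless: build_nvmf_app_args evaluates '[' '' -eq 1 ']' because a tuning variable is unset on this rig, the empty string is not an integer, so test exits non-zero and the optional branch is simply skipped. A guarded form that stays quiet; SOME_FLAG is a stand-in name, since the trace does not show which variable expanded empty:

# Illustrative guard for an integer test against a possibly-unset variable.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then   # :-0 substitutes a real integer when unset
    echo 'flag enabled'
fi
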
'[' -z tcp ']' 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:20.341 08:21:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:28.487 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.487 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:28.487 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:28.487 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:28.487 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:28.487 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:28.487 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:28.487 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:22:28.487 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:28.487 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:22:28.487 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:22:28.487 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:22:28.487 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:22:28.487 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:22:28.487 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:28.487 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.487 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.488 
08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:28.488 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:28.488 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:28.488 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:28.488 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.488 08:21:24 
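
gather_supported_nvmf_pci_devs, traced above, whitelists NICs by PCI vendor:device ID — Intel E810 (0x1592/0x159b) and X722 (0x37d2) plus a list of Mellanox ConnectX IDs — keeps only the e810 set on this e810 rig, then resolves each surviving PCI function to its kernel netdev through /sys/bus/pci/devices/$pci/net/*; both 0x159b ports (ice driver) come back as cvl_0_0 and cvl_0_1. A minimal sysfs walk in the same spirit, not SPDK's exact code:

# Sketch: map Intel E810 (0x8086:0x159b) PCI functions to their net interfaces.
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "${pci##*/} -> ${net##*/}"   # e.g. 0000:4b:00.0 -> cvl_0_0
    done
done
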
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.488 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.489 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.489 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:28.489 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:28.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:22:28.489 00:22:28.489 --- 10.0.0.2 ping statistics --- 00:22:28.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.489 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:22:28.489 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
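
nvmf_tcp_init, traced above, splits the two ports into separate network stacks so target and initiator traffic genuinely crosses the link: cvl_0_0 moves into a fresh namespace cvl_0_0_ns_spdk as 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule opens TCP/4420 on the initiator side, and connectivity is ping-verified in both directions (the reverse ping's output continues below). Condensed, with the names from the log:

# The namespace plumbing, in order, as executed by nvmf_tcp_init.
ip netns add cvl_0_0_ns_spdk                        # isolated stack for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side E810 port moves in
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns
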
00:22:28.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:22:28.489 00:22:28.489 --- 10.0.0.1 ping statistics --- 00:22:28.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.489 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:22:28.489 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.489 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:22:28.489 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:28.489 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.489 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:28.489 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:28.489 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.489 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:28.489 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:28.489 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:22:28.489 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:28.489 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:28.489 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:28.489 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2013390 00:22:28.489 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2013390 00:22:28.489 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:28.489 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2013390 ']' 00:22:28.489 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:28.489 [2024-11-28 08:21:25.060520] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:22:28.489 [2024-11-28 08:21:25.060589] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.489 [2024-11-28 08:21:25.132751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.489 [2024-11-28 08:21:25.178760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.489 [2024-11-28 08:21:25.178811] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.489 [2024-11-28 08:21:25.178818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.489 [2024-11-28 08:21:25.178823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.489 [2024-11-28 08:21:25.178828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:28.489 [2024-11-28 08:21:25.179519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.489 08:21:25 
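
Here is the premise of wait_for_buf: nvmf_tgt is launched inside the namespace with --wait-for-rpc, so configuration RPCs land before framework init, and iobuf_set_options shrinks the shared small-buffer pool to just 154 buffers — far below what the workload wants — forcing the TCP transport onto its wait-for-buffer path. The RPC sequence continues below (Malloc0 bdev, a TCP transport capped at 24 shared/per-queue buffers, subsystem nqn.2024-07.io.spdk:cnode0 listening on 10.0.0.2:4420); issued by hand against the same socket it would look roughly like this, with every name and value copied from the log:

# The configuration the test drives through its rpc_cmd wrapper.
RPC='scripts/rpc.py -s /var/tmp/spdk.sock'
$RPC accel_set_options --small-cache-size 0 --large-cache-size 0
$RPC iobuf_set_options --small-pool-count 154 --small_bufsize=8192  # starve the pool
$RPC framework_start_init                     # release the --wait-for-rpc pause
$RPC bdev_malloc_create -b Malloc0 32 512     # 32 MiB bdev, 512-byte blocks
$RPC nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
$RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
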
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:28.489 Malloc0 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:28.489 [2024-11-28 08:21:25.403640] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:28.489 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.490 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:28.490 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.490 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:28.490 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.490 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:28.490 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.490 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:28.490 [2024-11-28 08:21:25.439962] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.490 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.490 08:21:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:28.490 [2024-11-28 08:21:25.530266] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:29.882 Initializing NVMe Controllers 00:22:29.882 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:29.882 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:22:29.882 Initialization complete. Launching workers. 00:22:29.882 ======================================================== 00:22:29.882 Latency(us) 00:22:29.882 Device Information : IOPS MiB/s Average min max 00:22:29.882 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 119.00 14.88 34975.69 30929.02 109312.34 00:22:29.882 ======================================================== 00:22:29.882 Total : 119.00 14.88 34975.69 30929.02 109312.34 00:22:29.882 00:22:29.882 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:22:29.882 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:22:29.882 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.882 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:29.882 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.882 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1878 00:22:29.882 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1878 -eq 0 ]] 00:22:29.882 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:29.882 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:22:29.882 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:29.882 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:22:29.882 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:29.882 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:22:29.882 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:29.882 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:29.882 rmmod nvme_tcp 00:22:29.882 rmmod nvme_fabrics 00:22:29.882 rmmod nvme_keyring 00:22:29.882 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:29.882 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:22:29.882 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:22:29.882 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2013390 ']' 00:22:29.882 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2013390 00:22:29.882 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2013390 ']' 00:22:29.882 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2013390 00:22:29.882 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
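
That completes the actual check: spdk_nvme_perf drove 4-deep 128 KiB (-o 131072) random reads for one second, and with the pool starved nearly every request found it empty — iobuf_get_stats reports 1878 small-pool retries for the nvmf_TCP module — yet IO still completed (119 IOPS, ~35 ms average latency) instead of erroring out. The test asserts the retry counter is non-zero, i.e. the buffer-wait path was genuinely exercised. The extraction, standalone, with the jq filter exactly as in the log:

# Pull the small-pool retry counter for the TCP transport.
scripts/rpc.py -s /var/tmp/spdk.sock iobuf_get_stats \
    | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
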
common/autotest_common.sh@959 -- # uname 00:22:29.882 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:29.882 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2013390 00:22:29.882 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:29.882 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:29.882 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2013390' 00:22:29.882 killing process with pid 2013390 00:22:29.882 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2013390 00:22:29.882 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2013390 00:22:30.143 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:30.143 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:30.143 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:30.143 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:22:30.143 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:22:30.143 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:30.143 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:22:30.143 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:30.143 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:30.143 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.143 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:30.143 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.692 08:21:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:32.692 00:22:32.692 real 0m12.198s 00:22:32.692 user 0m4.495s 00:22:32.692 sys 0m6.171s 00:22:32.692 08:21:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:32.692 08:21:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:32.692 ************************************ 00:22:32.692 END TEST nvmf_wait_for_buf 00:22:32.692 ************************************ 00:22:32.692 08:21:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:22:32.692 08:21:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:22:32.692 08:21:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:22:32.692 08:21:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:22:32.692 08:21:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:22:32.692 08:21:29 
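
Teardown (nvmftestfini), traced above, mirrors the setup exactly: clear the traps, kill nvmfpid 2013390, modprobe -r the nvme-tcp/nvme-fabrics/nvme-keyring modules, sweep only the SPDK-tagged firewall rules, remove the cvl_0_0_ns_spdk namespace, and flush the leftover address — 12.2 s wall-clock for the whole pass, after which the harness starts re-enumerating NICs for nvmf_perf_adq. The firewall sweep works because every rule was inserted with an identifying comment, so cleanup reduces to a grep; both commands appear verbatim in the log:

# Tagged insert (from nvmf_tcp_init) and the matching sweep (from nvmf_tcp_fini).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drops only tagged rules
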
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:40.835 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:40.835 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:40.835 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:40.835 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:40.835 ************************************ 00:22:40.835 START TEST nvmf_perf_adq 00:22:40.835 ************************************ 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:40.835 * Looking for test storage... 00:22:40.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:40.835 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:40.836 08:21:36 
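
nvmf_perf_adq then repeats the familiar prologue — lcov version probe, nvmf/common.sh (including the same benign line-33 warning), E810 re-enumeration — before exercising ADQ, Intel's Application Device Queues, which steers each NVMe/TCP connection onto a dedicated hardware queue set on the E810. For orientation only: outside this script, ADQ is typically arranged with an mqprio channel qdisc plus a flower filter pinning the NVMe/TCP port to a traffic class. The commands below are an assumed illustration of that shape, not lifted from this log and not necessarily what perf_adq.sh runs:

# Rough ADQ shape on an ice NIC (assumed example).
tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 0 0 0 1 1 1 1 \
    queues 2@0 2@2 hw 1 mode channel          # TC1 = 2 dedicated channels
tc qdisc add dev cvl_0_0 clsact
tc filter add dev cvl_0_0 protocol ip ingress prio 1 flower ip_proto tcp \
    dst_port 4420 skip_sw hw_tc 1             # steer NVMe/TCP into TC1
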
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:40.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.836 --rc genhtml_branch_coverage=1 00:22:40.836 --rc genhtml_function_coverage=1 00:22:40.836 --rc genhtml_legend=1 00:22:40.836 --rc geninfo_all_blocks=1 00:22:40.836 --rc geninfo_unexecuted_blocks=1 00:22:40.836 00:22:40.836 ' 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:40.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.836 --rc genhtml_branch_coverage=1 00:22:40.836 --rc genhtml_function_coverage=1 00:22:40.836 --rc genhtml_legend=1 00:22:40.836 --rc geninfo_all_blocks=1 00:22:40.836 --rc geninfo_unexecuted_blocks=1 00:22:40.836 00:22:40.836 ' 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:40.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.836 --rc genhtml_branch_coverage=1 00:22:40.836 --rc genhtml_function_coverage=1 00:22:40.836 --rc genhtml_legend=1 00:22:40.836 --rc geninfo_all_blocks=1 00:22:40.836 --rc geninfo_unexecuted_blocks=1 00:22:40.836 00:22:40.836 ' 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:40.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:40.836 --rc genhtml_branch_coverage=1 00:22:40.836 --rc genhtml_function_coverage=1 00:22:40.836 --rc genhtml_legend=1 00:22:40.836 --rc geninfo_all_blocks=1 00:22:40.836 --rc geninfo_unexecuted_blocks=1 00:22:40.836 00:22:40.836 ' 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:40.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:40.836 08:21:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:40.836 08:21:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:47.425 08:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:47.425 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:47.425 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.425 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:47.425 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:47.426 08:21:43 
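The loop traced here maps each matching PCI function to its kernel net device through sysfs. A standalone equivalent using lspci (the harness itself walks a cached PCI scan rather than calling lspci; 8086:159b is the E810 device ID found on this node):

  # Enumerate E810 ports and print their net devices, mirroring the "Found net devices under ..." lines above.
  for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
      [[ -e "$netdir" ]] && echo "Found net devices under $pci: ${netdir##*/}"
    done
  done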
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.426 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:47.426 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:47.426 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:47.426 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:47.426 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:47.426 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:47.426 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:47.426 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:47.426 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:47.426 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:47.426 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:47.426 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:47.426 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:47.426 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:47.426 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:22:47.426 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:47.426 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:48.368 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:50.915 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:56.208 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:56.208 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:56.208 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.208 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:56.208 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:56.208 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:56.208 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.208 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.208 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.208 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:56.208 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:56.208 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:56.208 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.208 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:56.208 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:56.208 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:56.208 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:56.208 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:56.208 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:56.208 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:56.208 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:56.208 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:56.209 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:56.209 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:56.209 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:56.209 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:56.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:56.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:22:56.209 00:22:56.209 --- 10.0.0.2 ping statistics --- 00:22:56.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.209 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:56.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:56.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:22:56.209 00:22:56.209 --- 10.0.0.1 ping statistics --- 00:22:56.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.209 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:56.209 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:56.210 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:56.210 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.210 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2023299 00:22:56.210 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2023299 00:22:56.210 08:21:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:56.210 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2023299 ']' 00:22:56.210 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.210 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:56.210 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.210 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:56.210 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.210 [2024-11-28 08:21:53.057135] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
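For reference, the target startup traced above reduces to launching nvmf_tgt inside the freshly created namespace and waiting for its RPC socket (reconstructed from the trace; -i sets the shared-memory id, -e the tracepoint group mask):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!
  # --wait-for-rpc holds the app in a pre-init state until framework_start_init arrives
  # on /var/tmp/spdk.sock; that is the window in which the sock options below can still be set.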
00:22:56.210 [2024-11-28 08:21:53.057209] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:56.210 [2024-11-28 08:21:53.158597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:56.210 [2024-11-28 08:21:53.214306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:56.210 [2024-11-28 08:21:53.214362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:56.210 [2024-11-28 08:21:53.214376] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:56.210 [2024-11-28 08:21:53.214382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:56.210 [2024-11-28 08:21:53.214388] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:56.210 [2024-11-28 08:21:53.216513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.210 [2024-11-28 08:21:53.216673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.210 [2024-11-28 08:21:53.216720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.210 [2024-11-28 08:21:53.216721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:56.784 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.784 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:56.784 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:56.784 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:56.784 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.784 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.784 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:56.784 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:56.784 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:56.784 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.784 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.784 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.784 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:56.784 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:56.784 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.784 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.784 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.784 
08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:56.784 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.784 08:21:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:57.045 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.045 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:57.046 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.046 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:57.046 [2024-11-28 08:21:54.090327] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.046 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.046 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:57.046 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.046 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:57.046 Malloc1 00:22:57.046 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.046 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:57.046 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.046 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:57.046 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.046 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:57.046 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.046 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:57.046 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.046 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:57.046 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.046 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:57.046 [2024-11-28 08:21:54.166875] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.046 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.046 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2023655 00:22:57.046 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:22:57.046 08:21:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:59.135 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:22:59.135 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.135 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:59.135 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.135 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:22:59.135 "tick_rate": 2400000000, 00:22:59.135 "poll_groups": [ 00:22:59.135 { 00:22:59.135 "name": "nvmf_tgt_poll_group_000", 00:22:59.135 "admin_qpairs": 1, 00:22:59.135 "io_qpairs": 1, 00:22:59.135 "current_admin_qpairs": 1, 00:22:59.135 "current_io_qpairs": 1, 00:22:59.135 "pending_bdev_io": 0, 00:22:59.135 "completed_nvme_io": 18536, 00:22:59.135 "transports": [ 00:22:59.135 { 00:22:59.135 "trtype": "TCP" 00:22:59.135 } 00:22:59.135 ] 00:22:59.135 }, 00:22:59.135 { 00:22:59.135 "name": "nvmf_tgt_poll_group_001", 00:22:59.135 "admin_qpairs": 0, 00:22:59.135 "io_qpairs": 1, 00:22:59.135 "current_admin_qpairs": 0, 00:22:59.135 "current_io_qpairs": 1, 00:22:59.135 "pending_bdev_io": 0, 00:22:59.135 "completed_nvme_io": 19630, 00:22:59.135 "transports": [ 00:22:59.135 { 00:22:59.135 "trtype": "TCP" 00:22:59.135 } 00:22:59.135 ] 00:22:59.135 }, 00:22:59.135 { 00:22:59.135 "name": "nvmf_tgt_poll_group_002", 00:22:59.135 "admin_qpairs": 0, 00:22:59.135 "io_qpairs": 1, 00:22:59.135 "current_admin_qpairs": 0, 00:22:59.135 "current_io_qpairs": 1, 00:22:59.135 "pending_bdev_io": 0, 00:22:59.135 "completed_nvme_io": 19046, 00:22:59.135 "transports": [ 00:22:59.135 { 00:22:59.135 "trtype": "TCP" 00:22:59.135 } 00:22:59.135 ] 00:22:59.135 }, 00:22:59.135 { 00:22:59.135 "name": "nvmf_tgt_poll_group_003", 00:22:59.135 "admin_qpairs": 0, 00:22:59.135 "io_qpairs": 1, 00:22:59.135 "current_admin_qpairs": 0, 00:22:59.135 "current_io_qpairs": 1, 00:22:59.135 "pending_bdev_io": 0, 00:22:59.135 "completed_nvme_io": 17392, 00:22:59.135 "transports": [ 00:22:59.135 { 00:22:59.135 "trtype": "TCP" 00:22:59.135 } 00:22:59.135 ] 00:22:59.135 } 00:22:59.135 ] 00:22:59.135 }' 00:22:59.135 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:59.135 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:22:59.135 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:22:59.136 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:22:59.136 08:21:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2023655 00:23:07.281 Initializing NVMe Controllers 00:23:07.281 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:07.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:07.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:07.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:07.281 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:23:07.281 Initialization complete. Launching workers. 00:23:07.281 ======================================================== 00:23:07.281 Latency(us) 00:23:07.281 Device Information : IOPS MiB/s Average min max 00:23:07.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12487.20 48.78 5139.44 1326.46 44098.63 00:23:07.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13483.40 52.67 4746.76 1344.66 12906.01 00:23:07.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13466.00 52.60 4761.31 1195.37 45612.21 00:23:07.281 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13247.30 51.75 4831.01 1218.29 12679.81 00:23:07.281 ======================================================== 00:23:07.281 Total : 52683.90 205.80 4864.74 1195.37 45612.21 00:23:07.281 00:23:07.281 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:23:07.281 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:07.281 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:07.281 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:07.281 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:07.281 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:07.281 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:07.281 rmmod nvme_tcp 00:23:07.281 rmmod nvme_fabrics 00:23:07.281 rmmod nvme_keyring 00:23:07.281 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:07.281 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:07.281 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:07.281 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2023299 ']' 00:23:07.281 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2023299 00:23:07.281 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2023299 ']' 00:23:07.281 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2023299 00:23:07.281 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:07.281 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:07.281 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2023299 00:23:07.281 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:07.281 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:07.281 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2023299' 00:23:07.281 killing process with pid 2023299 00:23:07.281 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2023299 00:23:07.281 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2023299 00:23:07.541 08:22:04 
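The pass/fail gate for the run that just finished was the nvmf_get_stats check traced earlier: while spdk_nvme_perf drives randread traffic from cores 0xF0, each of the four target poll groups must own exactly one io_qpair. A sketch of that check as a standalone snippet (rpc.py path assumed, standing in for the harness's rpc_cmd wrapper):

  count=$(scripts/rpc.py nvmf_get_stats \
          | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
          | wc -l)
  if [[ $count -ne 4 ]]; then
      echo "qpair placement check failed: $count of 4 poll groups have an active io_qpair"
      exit 1
  fi

Here the count came back 4, so the [[ 4 -ne 4 ]] guard fell through and the run proceeded to the latency summary above.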
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:07.541 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:07.541 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:07.541 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:07.541 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:07.541 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:07.541 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:07.541 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:07.541 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:07.541 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.541 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.541 08:22:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.483 08:22:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:09.484 08:22:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:23:09.484 08:22:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:23:09.484 08:22:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:11.394 08:22:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:13.306 08:22:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:18.594 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:23:18.594 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:18.594 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.594 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:18.594 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:18.594 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:18.594 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.594 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.594 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.594 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:18.594 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:18.594 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:18.594 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:18.594 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:18.594 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:18.594 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:18.594 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:18.594 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:18.594 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:18.594 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:18.594 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:18.594 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:18.594 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:18.594 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:18.595 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:18.595 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:18.595 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:18.595 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:18.595 08:22:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:18.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:18.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:23:18.595 00:23:18.595 --- 10.0.0.2 ping statistics --- 00:23:18.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.595 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:18.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:18.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:23:18.595 00:23:18.595 --- 10.0.0.1 ping statistics --- 00:23:18.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.595 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:18.595 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:18.596 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:18.596 net.core.busy_poll = 1 00:23:18.596 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:18.596 net.core.busy_read = 1 00:23:18.596 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:18.596 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:18.596 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:18.596 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:18.596 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:18.857 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:18.857 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:18.857 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:18.857 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:18.857 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2028135 00:23:18.857 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2028135 00:23:18.857 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:18.857 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2028135 ']' 00:23:18.857 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.857 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:18.857 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.857 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:18.857 08:22:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:18.857 [2024-11-28 08:22:15.974904] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:23:18.857 [2024-11-28 08:22:15.974964] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.857 [2024-11-28 08:22:16.072391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:18.857 [2024-11-28 08:22:16.126108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:18.857 [2024-11-28 08:22:16.126171] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.857 [2024-11-28 08:22:16.126180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.857 [2024-11-28 08:22:16.126188] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.857 [2024-11-28 08:22:16.126194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:18.857 [2024-11-28 08:22:16.128553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.857 [2024-11-28 08:22:16.128686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.857 [2024-11-28 08:22:16.128850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:18.857 [2024-11-28 08:22:16.128851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.802 08:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.802 [2024-11-28 08:22:16.955656] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.802 Malloc1 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.802 08:22:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.802 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.802 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:19.802 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.802 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.802 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.802 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:19.802 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.802 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:19.802 [2024-11-28 08:22:17.025542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.802 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.802 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2028488 00:23:19.802 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:23:19.802 08:22:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:22.373 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:23:22.373 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.373 08:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.373 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.373 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:23:22.373 "tick_rate": 2400000000, 00:23:22.373 "poll_groups": [ 00:23:22.373 { 00:23:22.373 "name": "nvmf_tgt_poll_group_000", 00:23:22.373 "admin_qpairs": 1, 00:23:22.373 "io_qpairs": 2, 00:23:22.373 "current_admin_qpairs": 1, 00:23:22.373 "current_io_qpairs": 2, 00:23:22.373 "pending_bdev_io": 0, 00:23:22.373 "completed_nvme_io": 27079, 00:23:22.373 "transports": [ 00:23:22.373 { 00:23:22.373 "trtype": "TCP" 00:23:22.373 } 00:23:22.373 ] 00:23:22.373 }, 00:23:22.373 { 00:23:22.373 "name": "nvmf_tgt_poll_group_001", 00:23:22.373 "admin_qpairs": 0, 00:23:22.373 "io_qpairs": 2, 00:23:22.373 "current_admin_qpairs": 0, 00:23:22.373 "current_io_qpairs": 2, 00:23:22.373 "pending_bdev_io": 0, 00:23:22.373 "completed_nvme_io": 27361, 00:23:22.373 "transports": [ 00:23:22.373 { 00:23:22.373 "trtype": "TCP" 00:23:22.373 } 00:23:22.373 ] 00:23:22.373 }, 00:23:22.373 { 00:23:22.373 "name": "nvmf_tgt_poll_group_002", 00:23:22.373 "admin_qpairs": 0, 00:23:22.373 "io_qpairs": 0, 00:23:22.374 "current_admin_qpairs": 0, 00:23:22.374 "current_io_qpairs": 0, 00:23:22.374 "pending_bdev_io": 0, 00:23:22.374 "completed_nvme_io": 0, 00:23:22.374 "transports": [ 00:23:22.374 { 00:23:22.374 "trtype": "TCP" 00:23:22.374 } 00:23:22.374 ] 00:23:22.374 }, 00:23:22.374 { 00:23:22.374 "name": "nvmf_tgt_poll_group_003", 00:23:22.374 "admin_qpairs": 0, 00:23:22.374 "io_qpairs": 0, 00:23:22.374 "current_admin_qpairs": 0, 00:23:22.374 "current_io_qpairs": 0, 00:23:22.374 "pending_bdev_io": 0, 00:23:22.374 "completed_nvme_io": 0, 00:23:22.374 "transports": [ 00:23:22.374 { 00:23:22.374 "trtype": "TCP" 00:23:22.374 } 00:23:22.374 ] 00:23:22.374 } 00:23:22.374 ] 00:23:22.374 }' 00:23:22.374 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:22.374 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:23:22.374 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:23:22.374 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:23:22.374 08:22:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2028488 00:23:30.516 Initializing NVMe Controllers 00:23:30.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:30.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:30.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:30.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:30.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:30.516 Initialization complete. Launching workers. 
00:23:30.516 ======================================================== 00:23:30.516 Latency(us) 00:23:30.516 Device Information : IOPS MiB/s Average min max 00:23:30.516 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10020.25 39.14 6386.99 1135.04 52796.02 00:23:30.516 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9030.05 35.27 7087.21 1270.50 52903.40 00:23:30.516 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9445.55 36.90 6776.12 1372.20 53361.61 00:23:30.516 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8657.75 33.82 7399.15 1108.85 52937.62 00:23:30.516 ======================================================== 00:23:30.516 Total : 37153.60 145.13 6891.96 1108.85 53361.61 00:23:30.516 00:23:30.516 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:23:30.516 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:30.516 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:30.517 rmmod nvme_tcp 00:23:30.517 rmmod nvme_fabrics 00:23:30.517 rmmod nvme_keyring 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2028135 ']' 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2028135 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2028135 ']' 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2028135 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2028135 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2028135' 00:23:30.517 killing process with pid 2028135 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2028135 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2028135 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:30.517 
08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.517 08:22:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.430 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:32.430 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:23:32.430 00:23:32.430 real 0m52.889s 00:23:32.430 user 2m49.784s 00:23:32.430 sys 0m11.287s 00:23:32.430 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:32.430 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.430 ************************************ 00:23:32.430 END TEST nvmf_perf_adq 00:23:32.430 ************************************ 00:23:32.430 08:22:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:32.430 08:22:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:32.430 08:22:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:32.430 08:22:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:32.430 ************************************ 00:23:32.430 START TEST nvmf_shutdown 00:23:32.430 ************************************ 00:23:32.430 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:32.692 * Looking for test storage... 
00:23:32.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:32.692 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:32.692 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:23:32.692 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:32.692 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:32.692 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:32.692 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:32.692 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:32.692 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:32.692 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:32.692 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:32.692 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:32.692 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:32.692 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:32.692 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:32.692 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:32.692 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:32.692 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:23:32.692 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:32.692 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:32.692 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:32.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.693 --rc genhtml_branch_coverage=1 00:23:32.693 --rc genhtml_function_coverage=1 00:23:32.693 --rc genhtml_legend=1 00:23:32.693 --rc geninfo_all_blocks=1 00:23:32.693 --rc geninfo_unexecuted_blocks=1 00:23:32.693 00:23:32.693 ' 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:32.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.693 --rc genhtml_branch_coverage=1 00:23:32.693 --rc genhtml_function_coverage=1 00:23:32.693 --rc genhtml_legend=1 00:23:32.693 --rc geninfo_all_blocks=1 00:23:32.693 --rc geninfo_unexecuted_blocks=1 00:23:32.693 00:23:32.693 ' 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:32.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.693 --rc genhtml_branch_coverage=1 00:23:32.693 --rc genhtml_function_coverage=1 00:23:32.693 --rc genhtml_legend=1 00:23:32.693 --rc geninfo_all_blocks=1 00:23:32.693 --rc geninfo_unexecuted_blocks=1 00:23:32.693 00:23:32.693 ' 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:32.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.693 --rc genhtml_branch_coverage=1 00:23:32.693 --rc genhtml_function_coverage=1 00:23:32.693 --rc genhtml_legend=1 00:23:32.693 --rc geninfo_all_blocks=1 00:23:32.693 --rc geninfo_unexecuted_blocks=1 00:23:32.693 00:23:32.693 ' 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
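The lt/cmp_versions trace above is the stock SPDK version gate: both version strings are split on dots and dashes and the numeric fields compared left to right, with missing fields treated as zero. A minimal standalone sketch of the same comparison, assuming purely numeric fields (the name version_lt is illustrative, not a helper from the SPDK scripts):

    # Field-by-field version comparison, as traced above for "lt 1.15 2".
    # Assumes numeric fields; version_lt is an illustrative name only.
    version_lt() {
        local -a ver1 ver2
        IFS='.-' read -ra ver1 <<< "$1"
        IFS='.-' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # absent fields compare as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                                    # equal is not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2"

Here lcov 1.15 sorts below 2 on the first field, which is why the trace goes on to export the extra --rc lcov_branch_coverage/lcov_function_coverage options.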
00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:32.693 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:32.693 08:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:32.693 ************************************ 00:23:32.693 START TEST nvmf_shutdown_tc1 00:23:32.693 ************************************ 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:32.693 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:32.694 08:22:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:40.838 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:40.838 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:40.838 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:40.838 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:40.838 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:40.838 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:40.838 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:40.838 08:22:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:23:40.838 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:40.838 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:23:40.838 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:23:40.838 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:23:40.838 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:23:40.838 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:40.839 08:22:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:40.839 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:40.839 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:40.839 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:40.839 08:22:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:40.839 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:40.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:40.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:23:40.839 00:23:40.839 --- 10.0.0.2 ping statistics --- 00:23:40.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.839 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:40.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:40.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:23:40.839 00:23:40.839 --- 10.0.0.1 ping statistics --- 00:23:40.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.839 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.839 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:23:40.840 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:40.840 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.840 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:40.840 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:40.840 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.840 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:40.840 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:40.840 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:40.840 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:40.840 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.840 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:40.840 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2034710 00:23:40.840 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2034710 00:23:40.840 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:40.840 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2034710 ']' 00:23:40.840 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.840 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.840 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
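The nvmf_tcp_init block above is the entire dual-port TCP topology for this test: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to play the target, its sibling (cvl_0_1) stays in the root namespace as the initiator, a single iptables rule opens the NVMe/TCP port, and a ping in each direction proves the path. Condensed from the records above, with the interface names and addresses this run logged:

    # Target side lives in its own namespace so two ports of one host
    # can exercise a real network path between initiator and target.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

Every nvmf_tgt that follows is then launched under "ip netns exec cvl_0_0_ns_spdk" so it listens on the namespaced 10.0.0.2.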
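Further up, the build_nvmf_app_args trace also recorded a genuine shell error: common.sh line 33 ran '[' '' -eq 1 ']' and test printed "[: : integer expression expected", because an empty string cannot be parsed as an integer operand. The usual defence is to give the variable a numeric default before the comparison; a one-line sketch (VAR is a placeholder name, not the variable common.sh actually tests):

    # [ "$VAR" -eq 1 ] aborts when VAR is empty or unset;
    # ${VAR:-0} keeps the operand numeric. VAR is a placeholder.
    if [ "${VAR:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi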
00:23:40.840 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.840 08:22:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:40.840 [2024-11-28 08:22:37.514248] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:23:40.840 [2024-11-28 08:22:37.514313] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.840 [2024-11-28 08:22:37.615354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:40.840 [2024-11-28 08:22:37.667740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.840 [2024-11-28 08:22:37.667796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.840 [2024-11-28 08:22:37.667805] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.840 [2024-11-28 08:22:37.667813] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.840 [2024-11-28 08:22:37.667820] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.840 [2024-11-28 08:22:37.669855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:40.840 [2024-11-28 08:22:37.670016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:40.840 [2024-11-28 08:22:37.670195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:40.840 [2024-11-28 08:22:37.670197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.102 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.102 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:41.102 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:41.102 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:41.102 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:41.103 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.103 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:41.103 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.103 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:41.103 [2024-11-28 08:22:38.388183] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:41.365 08:22:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.365 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:41.365 Malloc1 
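The ten "for i / cat" records above append one heredoc block per subsystem to rpcs.txt, which the following rpc_cmd replays over a single RPC session; the Malloc1 through Malloc10 lines are the bdev-creation replies. Going by the analogous single-subsystem calls in the perf_adq run earlier in this log, each appended block plausibly reads as below for subsystem i (the serial-number format is an assumption; the malloc geometry and listener address are the ones this run logged):

    # One block per i in 1..10; 64 MB / 512 B match MALLOC_BDEV_SIZE and
    # MALLOC_BLOCK_SIZE above, 10.0.0.2:4420 matches the logged listener.
    bdev_malloc_create 64 512 -b Malloc$i
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i   # serial format assumed
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420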
00:23:41.365 [2024-11-28 08:22:38.514607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.365 Malloc2 00:23:41.365 Malloc3 00:23:41.365 Malloc4 00:23:41.627 Malloc5 00:23:41.627 Malloc6 00:23:41.627 Malloc7 00:23:41.627 Malloc8 00:23:41.627 Malloc9 00:23:41.627 Malloc10 00:23:41.889 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.889 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:41.889 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:41.889 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:41.889 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2035004 00:23:41.889 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2035004 /var/tmp/bdevperf.sock 00:23:41.889 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2035004 ']' 00:23:41.889 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.889 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.889 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
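The heredoc that shutdown.sh@29 cats into rpcs.txt is never echoed by xtrace above; only its side effects are visible (ten Malloc bdevs, subsystems cnode1 through cnode10, and the listener on 10.0.0.2:4420). One loop iteration plausibly appends a block like the following sketch; $MALLOC_BDEV_SIZE, $MALLOC_BLOCK_SIZE and the SPDK$i serial are illustrative assumptions rather than values visible in this run, while all four RPC names are standard rpc.py methods:

    # Hypothetical reconstruction of one pass of the shutdown.sh@28 loop --
    # the real heredoc lives in test/nvmf/target/shutdown.sh.
    for i in "${num_subsystems[@]}"; do
    	cat >> "$testdir/rpcs.txt" <<- EOL
    		bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc$i
    		nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    		nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    		nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t $TEST_TRANSPORT -a $NVMF_FIRST_TARGET_IP -s $NVMF_PORT
    	EOL
    done

The single rpc_cmd at shutdown.sh@36 then replays the accumulated file against the running target in one batch, which is why the Malloc1 through Malloc10 creation outputs above appear back to back.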
00:23:41.889 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:41.889 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.889 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:41.889 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:41.889 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:41.889 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:41.889 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.889 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.889 { 00:23:41.889 "params": { 00:23:41.890 "name": "Nvme$subsystem", 00:23:41.890 "trtype": "$TEST_TRANSPORT", 00:23:41.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.890 "adrfam": "ipv4", 00:23:41.890 "trsvcid": "$NVMF_PORT", 00:23:41.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.890 "hdgst": ${hdgst:-false}, 00:23:41.890 "ddgst": ${ddgst:-false} 00:23:41.890 }, 00:23:41.890 "method": "bdev_nvme_attach_controller" 00:23:41.890 } 00:23:41.890 EOF 00:23:41.890 )") 00:23:41.890 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.890 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.890 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.890 { 00:23:41.890 "params": { 00:23:41.890 "name": "Nvme$subsystem", 00:23:41.890 "trtype": "$TEST_TRANSPORT", 00:23:41.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.890 "adrfam": "ipv4", 00:23:41.890 "trsvcid": "$NVMF_PORT", 00:23:41.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.890 "hdgst": ${hdgst:-false}, 00:23:41.890 "ddgst": ${ddgst:-false} 00:23:41.890 }, 00:23:41.890 "method": "bdev_nvme_attach_controller" 00:23:41.890 } 00:23:41.890 EOF 00:23:41.890 )") 00:23:41.890 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.890 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.890 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.890 { 00:23:41.890 "params": { 00:23:41.890 "name": "Nvme$subsystem", 00:23:41.890 "trtype": "$TEST_TRANSPORT", 00:23:41.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.890 "adrfam": "ipv4", 00:23:41.890 "trsvcid": "$NVMF_PORT", 00:23:41.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.890 "hdgst": ${hdgst:-false}, 00:23:41.890 "ddgst": ${ddgst:-false} 00:23:41.890 }, 00:23:41.890 "method": "bdev_nvme_attach_controller" 
00:23:41.890 } 00:23:41.890 EOF 00:23:41.890 )") 00:23:41.890 08:22:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.890 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.890 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.890 { 00:23:41.890 "params": { 00:23:41.890 "name": "Nvme$subsystem", 00:23:41.890 "trtype": "$TEST_TRANSPORT", 00:23:41.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.890 "adrfam": "ipv4", 00:23:41.890 "trsvcid": "$NVMF_PORT", 00:23:41.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.890 "hdgst": ${hdgst:-false}, 00:23:41.890 "ddgst": ${ddgst:-false} 00:23:41.890 }, 00:23:41.890 "method": "bdev_nvme_attach_controller" 00:23:41.890 } 00:23:41.890 EOF 00:23:41.890 )") 00:23:41.890 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.890 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.890 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.890 { 00:23:41.890 "params": { 00:23:41.890 "name": "Nvme$subsystem", 00:23:41.890 "trtype": "$TEST_TRANSPORT", 00:23:41.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.890 "adrfam": "ipv4", 00:23:41.890 "trsvcid": "$NVMF_PORT", 00:23:41.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.890 "hdgst": ${hdgst:-false}, 00:23:41.890 "ddgst": ${ddgst:-false} 00:23:41.890 }, 00:23:41.890 "method": "bdev_nvme_attach_controller" 00:23:41.890 } 00:23:41.890 EOF 00:23:41.890 )") 00:23:41.890 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.890 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.890 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.890 { 00:23:41.890 "params": { 00:23:41.890 "name": "Nvme$subsystem", 00:23:41.890 "trtype": "$TEST_TRANSPORT", 00:23:41.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.890 "adrfam": "ipv4", 00:23:41.890 "trsvcid": "$NVMF_PORT", 00:23:41.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.890 "hdgst": ${hdgst:-false}, 00:23:41.890 "ddgst": ${ddgst:-false} 00:23:41.890 }, 00:23:41.890 "method": "bdev_nvme_attach_controller" 00:23:41.890 } 00:23:41.890 EOF 00:23:41.890 )") 00:23:41.890 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.890 [2024-11-28 08:22:39.026774] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:23:41.890 [2024-11-28 08:22:39.026847] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:41.890 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.890 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.890 { 00:23:41.890 "params": { 00:23:41.890 "name": "Nvme$subsystem", 00:23:41.890 "trtype": "$TEST_TRANSPORT", 00:23:41.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.890 "adrfam": "ipv4", 00:23:41.890 "trsvcid": "$NVMF_PORT", 00:23:41.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.890 "hdgst": ${hdgst:-false}, 00:23:41.890 "ddgst": ${ddgst:-false} 00:23:41.890 }, 00:23:41.890 "method": "bdev_nvme_attach_controller" 00:23:41.890 } 00:23:41.890 EOF 00:23:41.890 )") 00:23:41.890 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.890 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.890 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.890 { 00:23:41.890 "params": { 00:23:41.890 "name": "Nvme$subsystem", 00:23:41.890 "trtype": "$TEST_TRANSPORT", 00:23:41.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.890 "adrfam": "ipv4", 00:23:41.891 "trsvcid": "$NVMF_PORT", 00:23:41.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.891 "hdgst": ${hdgst:-false}, 00:23:41.891 "ddgst": ${ddgst:-false} 00:23:41.891 }, 00:23:41.891 "method": "bdev_nvme_attach_controller" 00:23:41.891 } 00:23:41.891 EOF 00:23:41.891 )") 00:23:41.891 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.891 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.891 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.891 { 00:23:41.891 "params": { 00:23:41.891 "name": "Nvme$subsystem", 00:23:41.891 "trtype": "$TEST_TRANSPORT", 00:23:41.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.891 "adrfam": "ipv4", 00:23:41.891 "trsvcid": "$NVMF_PORT", 00:23:41.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.891 "hdgst": ${hdgst:-false}, 00:23:41.891 "ddgst": ${ddgst:-false} 00:23:41.891 }, 00:23:41.891 "method": "bdev_nvme_attach_controller" 00:23:41.891 } 00:23:41.891 EOF 00:23:41.891 )") 00:23:41.891 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.891 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.891 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.891 { 00:23:41.891 "params": { 00:23:41.891 "name": "Nvme$subsystem", 00:23:41.891 "trtype": "$TEST_TRANSPORT", 00:23:41.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.891 "adrfam": "ipv4", 
00:23:41.891 "trsvcid": "$NVMF_PORT", 00:23:41.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.891 "hdgst": ${hdgst:-false}, 00:23:41.891 "ddgst": ${ddgst:-false} 00:23:41.891 }, 00:23:41.891 "method": "bdev_nvme_attach_controller" 00:23:41.891 } 00:23:41.891 EOF 00:23:41.891 )") 00:23:41.891 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:41.891 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:41.891 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:41.891 08:22:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:41.891 "params": { 00:23:41.891 "name": "Nvme1", 00:23:41.891 "trtype": "tcp", 00:23:41.891 "traddr": "10.0.0.2", 00:23:41.891 "adrfam": "ipv4", 00:23:41.891 "trsvcid": "4420", 00:23:41.891 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.891 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:41.891 "hdgst": false, 00:23:41.891 "ddgst": false 00:23:41.891 }, 00:23:41.891 "method": "bdev_nvme_attach_controller" 00:23:41.891 },{ 00:23:41.891 "params": { 00:23:41.891 "name": "Nvme2", 00:23:41.891 "trtype": "tcp", 00:23:41.891 "traddr": "10.0.0.2", 00:23:41.891 "adrfam": "ipv4", 00:23:41.891 "trsvcid": "4420", 00:23:41.891 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:41.891 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:41.891 "hdgst": false, 00:23:41.891 "ddgst": false 00:23:41.891 }, 00:23:41.891 "method": "bdev_nvme_attach_controller" 00:23:41.891 },{ 00:23:41.891 "params": { 00:23:41.891 "name": "Nvme3", 00:23:41.891 "trtype": "tcp", 00:23:41.891 "traddr": "10.0.0.2", 00:23:41.891 "adrfam": "ipv4", 00:23:41.891 "trsvcid": "4420", 00:23:41.891 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:41.891 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:41.891 "hdgst": false, 00:23:41.891 "ddgst": false 00:23:41.891 }, 00:23:41.891 "method": "bdev_nvme_attach_controller" 00:23:41.891 },{ 00:23:41.891 "params": { 00:23:41.891 "name": "Nvme4", 00:23:41.891 "trtype": "tcp", 00:23:41.891 "traddr": "10.0.0.2", 00:23:41.891 "adrfam": "ipv4", 00:23:41.891 "trsvcid": "4420", 00:23:41.891 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:41.891 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:41.891 "hdgst": false, 00:23:41.891 "ddgst": false 00:23:41.891 }, 00:23:41.891 "method": "bdev_nvme_attach_controller" 00:23:41.891 },{ 00:23:41.891 "params": { 00:23:41.891 "name": "Nvme5", 00:23:41.891 "trtype": "tcp", 00:23:41.891 "traddr": "10.0.0.2", 00:23:41.891 "adrfam": "ipv4", 00:23:41.891 "trsvcid": "4420", 00:23:41.891 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:41.891 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:41.891 "hdgst": false, 00:23:41.891 "ddgst": false 00:23:41.891 }, 00:23:41.891 "method": "bdev_nvme_attach_controller" 00:23:41.891 },{ 00:23:41.891 "params": { 00:23:41.891 "name": "Nvme6", 00:23:41.891 "trtype": "tcp", 00:23:41.891 "traddr": "10.0.0.2", 00:23:41.891 "adrfam": "ipv4", 00:23:41.891 "trsvcid": "4420", 00:23:41.891 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:41.891 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:41.891 "hdgst": false, 00:23:41.891 "ddgst": false 00:23:41.891 }, 00:23:41.891 "method": "bdev_nvme_attach_controller" 00:23:41.891 },{ 00:23:41.891 "params": { 00:23:41.891 "name": "Nvme7", 00:23:41.891 "trtype": "tcp", 00:23:41.891 "traddr": "10.0.0.2", 00:23:41.891 
"adrfam": "ipv4", 00:23:41.891 "trsvcid": "4420", 00:23:41.891 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:41.891 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:41.891 "hdgst": false, 00:23:41.891 "ddgst": false 00:23:41.891 }, 00:23:41.891 "method": "bdev_nvme_attach_controller" 00:23:41.891 },{ 00:23:41.891 "params": { 00:23:41.891 "name": "Nvme8", 00:23:41.891 "trtype": "tcp", 00:23:41.891 "traddr": "10.0.0.2", 00:23:41.891 "adrfam": "ipv4", 00:23:41.891 "trsvcid": "4420", 00:23:41.891 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:41.891 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:41.891 "hdgst": false, 00:23:41.892 "ddgst": false 00:23:41.892 }, 00:23:41.892 "method": "bdev_nvme_attach_controller" 00:23:41.892 },{ 00:23:41.892 "params": { 00:23:41.892 "name": "Nvme9", 00:23:41.892 "trtype": "tcp", 00:23:41.892 "traddr": "10.0.0.2", 00:23:41.892 "adrfam": "ipv4", 00:23:41.892 "trsvcid": "4420", 00:23:41.892 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:41.892 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:41.892 "hdgst": false, 00:23:41.892 "ddgst": false 00:23:41.892 }, 00:23:41.892 "method": "bdev_nvme_attach_controller" 00:23:41.892 },{ 00:23:41.892 "params": { 00:23:41.892 "name": "Nvme10", 00:23:41.892 "trtype": "tcp", 00:23:41.892 "traddr": "10.0.0.2", 00:23:41.892 "adrfam": "ipv4", 00:23:41.892 "trsvcid": "4420", 00:23:41.892 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:41.892 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:41.892 "hdgst": false, 00:23:41.892 "ddgst": false 00:23:41.892 }, 00:23:41.892 "method": "bdev_nvme_attach_controller" 00:23:41.892 }' 00:23:41.892 [2024-11-28 08:22:39.121609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.892 [2024-11-28 08:22:39.175817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.278 08:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:43.278 08:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:43.278 08:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:43.278 08:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.278 08:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:43.278 08:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.278 08:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2035004 00:23:43.278 08:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:23:43.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2035004 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:43.278 08:22:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:44.220 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2034710 00:23:44.220 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:44.220 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:44.220 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:44.220 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:44.220 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.220 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.220 { 00:23:44.220 "params": { 00:23:44.220 "name": "Nvme$subsystem", 00:23:44.220 "trtype": "$TEST_TRANSPORT", 00:23:44.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.220 "adrfam": "ipv4", 00:23:44.220 "trsvcid": "$NVMF_PORT", 00:23:44.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.220 "hdgst": ${hdgst:-false}, 00:23:44.220 "ddgst": ${ddgst:-false} 00:23:44.220 }, 00:23:44.220 "method": "bdev_nvme_attach_controller" 00:23:44.220 } 00:23:44.220 EOF 00:23:44.220 )") 00:23:44.220 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:44.220 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.220 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.220 { 00:23:44.220 "params": { 00:23:44.220 "name": "Nvme$subsystem", 00:23:44.220 "trtype": "$TEST_TRANSPORT", 00:23:44.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.220 "adrfam": "ipv4", 00:23:44.220 "trsvcid": "$NVMF_PORT", 00:23:44.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.220 "hdgst": ${hdgst:-false}, 00:23:44.220 "ddgst": ${ddgst:-false} 00:23:44.220 }, 00:23:44.220 "method": "bdev_nvme_attach_controller" 00:23:44.220 } 00:23:44.220 EOF 00:23:44.220 )") 00:23:44.220 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:44.220 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.220 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.220 { 00:23:44.220 "params": { 00:23:44.220 "name": "Nvme$subsystem", 00:23:44.220 "trtype": "$TEST_TRANSPORT", 00:23:44.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.220 "adrfam": "ipv4", 00:23:44.220 "trsvcid": "$NVMF_PORT", 00:23:44.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.220 "hdgst": ${hdgst:-false}, 00:23:44.220 "ddgst": ${ddgst:-false} 00:23:44.220 }, 00:23:44.220 "method": "bdev_nvme_attach_controller" 00:23:44.220 } 00:23:44.220 EOF 00:23:44.220 )") 00:23:44.220 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:44.220 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.220 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.220 { 00:23:44.220 "params": { 00:23:44.220 "name": "Nvme$subsystem", 00:23:44.220 "trtype": "$TEST_TRANSPORT", 00:23:44.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.220 "adrfam": "ipv4", 00:23:44.220 "trsvcid": "$NVMF_PORT", 00:23:44.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.220 "hdgst": ${hdgst:-false}, 00:23:44.220 "ddgst": ${ddgst:-false} 00:23:44.220 }, 00:23:44.220 "method": "bdev_nvme_attach_controller" 00:23:44.220 } 00:23:44.220 EOF 00:23:44.220 )") 00:23:44.220 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:44.220 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.220 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.220 { 00:23:44.220 "params": { 00:23:44.220 "name": "Nvme$subsystem", 00:23:44.220 "trtype": "$TEST_TRANSPORT", 00:23:44.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.220 "adrfam": "ipv4", 00:23:44.220 "trsvcid": "$NVMF_PORT", 00:23:44.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.221 "hdgst": ${hdgst:-false}, 00:23:44.221 "ddgst": ${ddgst:-false} 00:23:44.221 }, 00:23:44.221 "method": "bdev_nvme_attach_controller" 00:23:44.221 } 00:23:44.221 EOF 00:23:44.221 )") 00:23:44.221 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:44.221 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.221 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.221 { 00:23:44.221 "params": { 00:23:44.221 "name": "Nvme$subsystem", 00:23:44.221 "trtype": "$TEST_TRANSPORT", 00:23:44.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.221 "adrfam": "ipv4", 00:23:44.221 "trsvcid": "$NVMF_PORT", 00:23:44.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.221 "hdgst": ${hdgst:-false}, 00:23:44.221 "ddgst": ${ddgst:-false} 00:23:44.221 }, 00:23:44.221 "method": "bdev_nvme_attach_controller" 00:23:44.221 } 00:23:44.221 EOF 00:23:44.221 )") 00:23:44.221 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:44.221 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.221 [2024-11-28 08:22:41.498713] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:23:44.221 [2024-11-28 08:22:41.498767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2035651 ] 00:23:44.221 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.221 { 00:23:44.221 "params": { 00:23:44.221 "name": "Nvme$subsystem", 00:23:44.221 "trtype": "$TEST_TRANSPORT", 00:23:44.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.221 "adrfam": "ipv4", 00:23:44.221 "trsvcid": "$NVMF_PORT", 00:23:44.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.221 "hdgst": ${hdgst:-false}, 00:23:44.221 "ddgst": ${ddgst:-false} 00:23:44.221 }, 00:23:44.221 "method": "bdev_nvme_attach_controller" 00:23:44.221 } 00:23:44.221 EOF 00:23:44.221 )") 00:23:44.221 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:44.221 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.221 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.221 { 00:23:44.221 "params": { 00:23:44.221 "name": "Nvme$subsystem", 00:23:44.221 "trtype": "$TEST_TRANSPORT", 00:23:44.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.221 "adrfam": "ipv4", 00:23:44.221 "trsvcid": "$NVMF_PORT", 00:23:44.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.221 "hdgst": ${hdgst:-false}, 00:23:44.221 "ddgst": ${ddgst:-false} 00:23:44.221 }, 00:23:44.221 "method": "bdev_nvme_attach_controller" 00:23:44.221 } 00:23:44.221 EOF 00:23:44.221 )") 00:23:44.482 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:44.482 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.482 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.482 { 00:23:44.482 "params": { 00:23:44.482 "name": "Nvme$subsystem", 00:23:44.482 "trtype": "$TEST_TRANSPORT", 00:23:44.482 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.482 "adrfam": "ipv4", 00:23:44.482 "trsvcid": "$NVMF_PORT", 00:23:44.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.482 "hdgst": ${hdgst:-false}, 00:23:44.482 "ddgst": ${ddgst:-false} 00:23:44.482 }, 00:23:44.482 "method": "bdev_nvme_attach_controller" 00:23:44.482 } 00:23:44.482 EOF 00:23:44.482 )") 00:23:44.482 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:44.482 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.482 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.482 { 00:23:44.482 "params": { 00:23:44.482 "name": "Nvme$subsystem", 00:23:44.482 "trtype": "$TEST_TRANSPORT", 00:23:44.482 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.482 "adrfam": "ipv4", 00:23:44.482 "trsvcid": "$NVMF_PORT", 00:23:44.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.482 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.482 "hdgst": ${hdgst:-false}, 00:23:44.482 "ddgst": ${ddgst:-false} 00:23:44.482 }, 00:23:44.482 "method": "bdev_nvme_attach_controller" 00:23:44.482 } 00:23:44.482 EOF 00:23:44.482 )") 00:23:44.482 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:44.482 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:44.482 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:44.482 08:22:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:44.482 "params": { 00:23:44.482 "name": "Nvme1", 00:23:44.482 "trtype": "tcp", 00:23:44.482 "traddr": "10.0.0.2", 00:23:44.482 "adrfam": "ipv4", 00:23:44.482 "trsvcid": "4420", 00:23:44.482 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.482 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:44.482 "hdgst": false, 00:23:44.482 "ddgst": false 00:23:44.482 }, 00:23:44.482 "method": "bdev_nvme_attach_controller" 00:23:44.482 },{ 00:23:44.482 "params": { 00:23:44.482 "name": "Nvme2", 00:23:44.482 "trtype": "tcp", 00:23:44.482 "traddr": "10.0.0.2", 00:23:44.482 "adrfam": "ipv4", 00:23:44.482 "trsvcid": "4420", 00:23:44.482 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:44.482 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:44.482 "hdgst": false, 00:23:44.482 "ddgst": false 00:23:44.482 }, 00:23:44.482 "method": "bdev_nvme_attach_controller" 00:23:44.482 },{ 00:23:44.482 "params": { 00:23:44.482 "name": "Nvme3", 00:23:44.482 "trtype": "tcp", 00:23:44.483 "traddr": "10.0.0.2", 00:23:44.483 "adrfam": "ipv4", 00:23:44.483 "trsvcid": "4420", 00:23:44.483 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:44.483 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:44.483 "hdgst": false, 00:23:44.483 "ddgst": false 00:23:44.483 }, 00:23:44.483 "method": "bdev_nvme_attach_controller" 00:23:44.483 },{ 00:23:44.483 "params": { 00:23:44.483 "name": "Nvme4", 00:23:44.483 "trtype": "tcp", 00:23:44.483 "traddr": "10.0.0.2", 00:23:44.483 "adrfam": "ipv4", 00:23:44.483 "trsvcid": "4420", 00:23:44.483 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:44.483 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:44.483 "hdgst": false, 00:23:44.483 "ddgst": false 00:23:44.483 }, 00:23:44.483 "method": "bdev_nvme_attach_controller" 00:23:44.483 },{ 00:23:44.483 "params": { 00:23:44.483 "name": "Nvme5", 00:23:44.483 "trtype": "tcp", 00:23:44.483 "traddr": "10.0.0.2", 00:23:44.483 "adrfam": "ipv4", 00:23:44.483 "trsvcid": "4420", 00:23:44.483 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:44.483 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:44.483 "hdgst": false, 00:23:44.483 "ddgst": false 00:23:44.483 }, 00:23:44.483 "method": "bdev_nvme_attach_controller" 00:23:44.483 },{ 00:23:44.483 "params": { 00:23:44.483 "name": "Nvme6", 00:23:44.483 "trtype": "tcp", 00:23:44.483 "traddr": "10.0.0.2", 00:23:44.483 "adrfam": "ipv4", 00:23:44.483 "trsvcid": "4420", 00:23:44.483 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:44.483 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:44.483 "hdgst": false, 00:23:44.483 "ddgst": false 00:23:44.483 }, 00:23:44.483 "method": "bdev_nvme_attach_controller" 00:23:44.483 },{ 00:23:44.483 "params": { 00:23:44.483 "name": "Nvme7", 00:23:44.483 "trtype": "tcp", 00:23:44.483 "traddr": "10.0.0.2", 00:23:44.483 "adrfam": "ipv4", 00:23:44.483 "trsvcid": "4420", 00:23:44.483 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:44.483 
"hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:44.483 "hdgst": false, 00:23:44.483 "ddgst": false 00:23:44.483 }, 00:23:44.483 "method": "bdev_nvme_attach_controller" 00:23:44.483 },{ 00:23:44.483 "params": { 00:23:44.483 "name": "Nvme8", 00:23:44.483 "trtype": "tcp", 00:23:44.483 "traddr": "10.0.0.2", 00:23:44.483 "adrfam": "ipv4", 00:23:44.483 "trsvcid": "4420", 00:23:44.483 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:44.483 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:44.483 "hdgst": false, 00:23:44.483 "ddgst": false 00:23:44.483 }, 00:23:44.483 "method": "bdev_nvme_attach_controller" 00:23:44.483 },{ 00:23:44.483 "params": { 00:23:44.483 "name": "Nvme9", 00:23:44.483 "trtype": "tcp", 00:23:44.483 "traddr": "10.0.0.2", 00:23:44.483 "adrfam": "ipv4", 00:23:44.483 "trsvcid": "4420", 00:23:44.483 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:44.483 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:44.483 "hdgst": false, 00:23:44.483 "ddgst": false 00:23:44.483 }, 00:23:44.483 "method": "bdev_nvme_attach_controller" 00:23:44.483 },{ 00:23:44.483 "params": { 00:23:44.483 "name": "Nvme10", 00:23:44.483 "trtype": "tcp", 00:23:44.483 "traddr": "10.0.0.2", 00:23:44.483 "adrfam": "ipv4", 00:23:44.483 "trsvcid": "4420", 00:23:44.483 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:44.483 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:44.483 "hdgst": false, 00:23:44.483 "ddgst": false 00:23:44.483 }, 00:23:44.483 "method": "bdev_nvme_attach_controller" 00:23:44.483 }' 00:23:44.483 [2024-11-28 08:22:41.588851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.483 [2024-11-28 08:22:41.624808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.867 Running I/O for 1 seconds... 00:23:47.071 1860.00 IOPS, 116.25 MiB/s 00:23:47.071 Latency(us) 00:23:47.071 [2024-11-28T07:22:44.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:47.071 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:47.071 Verification LBA range: start 0x0 length 0x400 00:23:47.071 Nvme1n1 : 1.06 242.35 15.15 0.00 0.00 261357.55 13653.33 246415.36 00:23:47.071 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:47.071 Verification LBA range: start 0x0 length 0x400 00:23:47.071 Nvme2n1 : 1.14 223.97 14.00 0.00 0.00 278177.49 19660.80 248162.99 00:23:47.071 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:47.071 Verification LBA range: start 0x0 length 0x400 00:23:47.071 Nvme3n1 : 1.06 245.90 15.37 0.00 0.00 246988.54 6444.37 232434.35 00:23:47.071 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:47.071 Verification LBA range: start 0x0 length 0x400 00:23:47.071 Nvme4n1 : 1.15 223.38 13.96 0.00 0.00 269417.17 14636.37 248162.99 00:23:47.071 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:47.071 Verification LBA range: start 0x0 length 0x400 00:23:47.071 Nvme5n1 : 1.19 269.69 16.86 0.00 0.00 219525.80 18677.76 249910.61 00:23:47.071 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:47.071 Verification LBA range: start 0x0 length 0x400 00:23:47.071 Nvme6n1 : 1.14 225.37 14.09 0.00 0.00 257377.28 19551.57 246415.36 00:23:47.071 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:47.071 Verification LBA range: start 0x0 length 0x400 00:23:47.071 Nvme7n1 : 1.19 269.47 16.84 0.00 0.00 212110.76 12342.61 219327.15 00:23:47.071 Job: Nvme8n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536)
00:23:47.071 Verification LBA range: start 0x0 length 0x400
00:23:47.071 Nvme8n1 : 1.15 222.32 13.90 0.00 0.00 251882.24 27634.35 244667.73
00:23:47.071 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.071 Verification LBA range: start 0x0 length 0x400
00:23:47.071 Nvme9n1 : 1.18 271.27 16.95 0.00 0.00 203274.75 16274.77 234181.97
00:23:47.071 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.071 Verification LBA range: start 0x0 length 0x400
00:23:47.071 Nvme10n1 : 1.20 266.86 16.68 0.00 0.00 202746.92 6280.53 270882.13
00:23:47.071 [2024-11-28T07:22:44.360Z] ===================================================================================================================
00:23:47.071 [2024-11-28T07:22:44.360Z] Total : 2460.59 153.79 0.00 0.00 237492.86 6280.53 270882.13
00:23:47.071 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:23:47.071 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:47.071 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:47.071 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:47.071 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:23:47.071 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:47.071 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync
00:23:47.071 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:47.071 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e
00:23:47.071 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:47.071 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:47.071 rmmod nvme_tcp
00:23:47.071 rmmod nvme_fabrics
00:23:47.071 rmmod nvme_keyring
00:23:47.071 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:47.071 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e
00:23:47.071 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0
00:23:47.071 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2034710 ']'
00:23:47.071 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2034710
00:23:47.071 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2034710 ']'
00:23:47.071 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2034710
00:23:47.071 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname
00:23:47.071 08:22:44
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:47.071 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2034710 00:23:47.332 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:47.332 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:47.332 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2034710' 00:23:47.332 killing process with pid 2034710 00:23:47.332 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2034710 00:23:47.332 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2034710 00:23:47.332 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:47.332 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:47.332 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:47.332 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:23:47.594 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:23:47.594 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:47.594 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:23:47.594 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:47.594 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:47.594 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.594 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.594 08:22:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.509 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:49.509 00:23:49.509 real 0m16.806s 00:23:49.509 user 0m33.774s 00:23:49.509 sys 0m6.961s 00:23:49.509 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:49.509 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:49.509 ************************************ 00:23:49.509 END TEST nvmf_shutdown_tc1 00:23:49.509 ************************************ 00:23:49.509 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:49.509 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:49.509 08:22:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:49.509 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:49.509 ************************************ 00:23:49.509 START TEST nvmf_shutdown_tc2 00:23:49.509 ************************************ 00:23:49.509 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:23:49.509 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:49.509 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:49.509 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:49.509 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:49.509 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:49.509 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:49.509 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:49.509 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.509 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:49.509 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:49.770 08:22:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:49.770 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 
- 0x159b)' 00:23:49.771 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:49.771 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:49.771 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:49.771 
08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:49.771 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- 
# ip -4 addr flush cvl_0_1
00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:23:49.771 08:22:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:50.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:50.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms
00:23:50.032
00:23:50.032 --- 10.0.0.2 ping statistics ---
00:23:50.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:50.032 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:50.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:50.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms
00:23:50.032
00:23:50.032 --- 10.0.0.1 ping statistics ---
00:23:50.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:50.032 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2036804
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2036804
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2036804 ']'
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
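In summary, the nvmf_tcp_init sequence traced above turns one back-to-back pair of E810 ports into a two-host topology on a single machine: the target port cvl_0_0 is moved into its own network namespace (cvl_0_0_ns_spdk) at 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace at 10.0.0.1/24, TCP port 4420 (the NVMe/TCP listen port used later) is opened in the firewall, and one ping in each direction verifies the link. A condensed sketch of the same steps, derived from this trace rather than the verbatim nvmf/common.sh source (run as root; interface names as renamed by the harness):

# condensed from the trace above; not the exact common.sh implementation
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1          # start both ports clean
ip netns add cvl_0_0_ns_spdk                                # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port
ping -c 1 10.0.0.2                                          # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target namespace -> initiator

Splitting target and initiator across namespaces forces the NVMe/TCP traffic over the physical cable instead of the kernel loopback, which is the point of the phy (physical NIC) variant of this job.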
00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:50.032 08:22:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:50.032 [2024-11-28 08:22:47.227147] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:23:50.032 [2024-11-28 08:22:47.227210] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.293 [2024-11-28 08:22:47.323136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:50.293 [2024-11-28 08:22:47.353454] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.293 [2024-11-28 08:22:47.353484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.293 [2024-11-28 08:22:47.353492] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.293 [2024-11-28 08:22:47.353497] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.293 [2024-11-28 08:22:47.353501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:50.293 [2024-11-28 08:22:47.354938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.293 [2024-11-28 08:22:47.355054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:50.294 [2024-11-28 08:22:47.355065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:50.294 [2024-11-28 08:22:47.355071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:50.865 [2024-11-28 08:22:48.076647] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:50.865 08:22:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:50.865 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:50.866 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:50.866 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:50.866 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:50.866 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:50.866 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:50.866 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:50.866 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:50.866 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:50.866 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:50.866 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:50.866 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.866 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:51.127 Malloc1 
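shutdown.sh@27-29, traced just above, deletes any stale rpcs.txt and then cats one stanza into it per subsystem in {1..10}; the single rpc_cmd at shutdown.sh@36 replays the accumulated file in one rpc.py session, which is why ten Malloc bdevs and a single listener notice appear below without ten separate RPC round-trips. The stanza bodies themselves are not echoed by xtrace, so the following is only a plausible reconstruction based on the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener that show up next; the bdev size, block size, and flags are assumptions, not values from this log:

# hypothetical stanza appended for subsystem $i -- reconstructed, not visible in this trace
cat >> rpcs.txt << EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF

rpc_cmd presumably reads the file on stdin (rpc_cmd < rpcs.txt); xtrace does not display redirections, so only the bare rpc_cmd shows at shutdown.sh@36.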
00:23:51.127 [2024-11-28 08:22:48.186093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.127 Malloc2 00:23:51.127 Malloc3 00:23:51.127 Malloc4 00:23:51.127 Malloc5 00:23:51.127 Malloc6 00:23:51.127 Malloc7 00:23:51.389 Malloc8 00:23:51.389 Malloc9 00:23:51.389 Malloc10 00:23:51.389 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.389 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:51.389 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:51.389 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:51.389 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2037191 00:23:51.389 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2037191 /var/tmp/bdevperf.sock 00:23:51.389 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2037191 ']' 00:23:51.389 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.389 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.389 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:51.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
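In the bdevperf launch traced next (shutdown.sh@103), --json /dev/fd/63 is the visible half of a bash process substitution: gen_nvmf_target_json runs in a subshell (which is why it gets its own xtrace line) and its output is handed to bdevperf as an open file descriptor, so no temporary JSON file is written. The invocation has this shape:

# equivalent form of the traced call; <(...) is what bdevperf sees as /dev/fd/63
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10

The flags request a queue depth of 64, 64 KiB I/Os, the verify workload, and a 10-second run, matching the "Running I/O for 10 seconds..." banner that follows.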
00:23:51.389 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:51.389 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.389 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:51.389 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:51.389 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:23:51.389 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:23:51.389 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:51.389 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:51.389 { 00:23:51.389 "params": { 00:23:51.389 "name": "Nvme$subsystem", 00:23:51.389 "trtype": "$TEST_TRANSPORT", 00:23:51.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.389 "adrfam": "ipv4", 00:23:51.390 "trsvcid": "$NVMF_PORT", 00:23:51.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.390 "hdgst": ${hdgst:-false}, 00:23:51.390 "ddgst": ${ddgst:-false} 00:23:51.390 }, 00:23:51.390 "method": "bdev_nvme_attach_controller" 00:23:51.390 } 00:23:51.390 EOF 00:23:51.390 )") 00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:51.390 { 00:23:51.390 "params": { 00:23:51.390 "name": "Nvme$subsystem", 00:23:51.390 "trtype": "$TEST_TRANSPORT", 00:23:51.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.390 "adrfam": "ipv4", 00:23:51.390 "trsvcid": "$NVMF_PORT", 00:23:51.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.390 "hdgst": ${hdgst:-false}, 00:23:51.390 "ddgst": ${ddgst:-false} 00:23:51.390 }, 00:23:51.390 "method": "bdev_nvme_attach_controller" 00:23:51.390 } 00:23:51.390 EOF 00:23:51.390 )") 00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:51.390 { 00:23:51.390 "params": { 00:23:51.390 "name": "Nvme$subsystem", 00:23:51.390 "trtype": "$TEST_TRANSPORT", 00:23:51.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.390 "adrfam": "ipv4", 00:23:51.390 "trsvcid": "$NVMF_PORT", 00:23:51.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.390 "hdgst": ${hdgst:-false}, 00:23:51.390 "ddgst": ${ddgst:-false} 00:23:51.390 }, 00:23:51.390 "method": 
"bdev_nvme_attach_controller" 00:23:51.390 } 00:23:51.390 EOF 00:23:51.390 )") 00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:51.390 { 00:23:51.390 "params": { 00:23:51.390 "name": "Nvme$subsystem", 00:23:51.390 "trtype": "$TEST_TRANSPORT", 00:23:51.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.390 "adrfam": "ipv4", 00:23:51.390 "trsvcid": "$NVMF_PORT", 00:23:51.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.390 "hdgst": ${hdgst:-false}, 00:23:51.390 "ddgst": ${ddgst:-false} 00:23:51.390 }, 00:23:51.390 "method": "bdev_nvme_attach_controller" 00:23:51.390 } 00:23:51.390 EOF 00:23:51.390 )") 00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:51.390 { 00:23:51.390 "params": { 00:23:51.390 "name": "Nvme$subsystem", 00:23:51.390 "trtype": "$TEST_TRANSPORT", 00:23:51.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.390 "adrfam": "ipv4", 00:23:51.390 "trsvcid": "$NVMF_PORT", 00:23:51.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.390 "hdgst": ${hdgst:-false}, 00:23:51.390 "ddgst": ${ddgst:-false} 00:23:51.390 }, 00:23:51.390 "method": "bdev_nvme_attach_controller" 00:23:51.390 } 00:23:51.390 EOF 00:23:51.390 )") 00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:51.390 { 00:23:51.390 "params": { 00:23:51.390 "name": "Nvme$subsystem", 00:23:51.390 "trtype": "$TEST_TRANSPORT", 00:23:51.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.390 "adrfam": "ipv4", 00:23:51.390 "trsvcid": "$NVMF_PORT", 00:23:51.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.390 "hdgst": ${hdgst:-false}, 00:23:51.390 "ddgst": ${ddgst:-false} 00:23:51.390 }, 00:23:51.390 "method": "bdev_nvme_attach_controller" 00:23:51.390 } 00:23:51.390 EOF 00:23:51.390 )") 00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:51.390 [2024-11-28 08:22:48.632916] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:23:51.390 [2024-11-28 08:22:48.632971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2037191 ]
00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq .
00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=,
00:23:51.390 08:22:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{
  "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1", "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller"
},{
  "params": { "name": "Nvme2", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode2", "hostnqn": "nqn.2016-06.io.spdk:host2", "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller"
},{
  "params": { "name": "Nvme3", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode3", "hostnqn": "nqn.2016-06.io.spdk:host3", "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller"
},{
  "params": { "name": "Nvme4", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode4", "hostnqn": "nqn.2016-06.io.spdk:host4", "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller"
},{
  "params": { "name": "Nvme5", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode5", "hostnqn": "nqn.2016-06.io.spdk:host5", "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller"
},{
  "params": { "name": "Nvme6", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode6", "hostnqn": "nqn.2016-06.io.spdk:host6", "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller"
},{
  "params": { "name": "Nvme7", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode7", "hostnqn": "nqn.2016-06.io.spdk:host7", "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller"
},{
  "params": { "name": "Nvme8", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode8", "hostnqn": "nqn.2016-06.io.spdk:host8", "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller"
},{
  "params": { "name": "Nvme9", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode9", "hostnqn": "nqn.2016-06.io.spdk:host9", "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller"
},{
  "params": { "name": "Nvme10", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode10", "hostnqn": "nqn.2016-06.io.spdk:host10", "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller"
}'
00:23:51.652 [2024-11-28 08:22:48.722997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-28 08:22:48.759536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:53.037 Running I/O for 10 seconds...
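Before shutting anything down, the harness waits for proof that I/O is actually flowing: waitforio (shutdown.sh@51-70, traced below) polls bdev_get_iostat for Nvme1n1 up to ten times, extracts num_read_ops with jq, and breaks once at least 100 reads have completed; in this run the counter climbs 3 -> 67 -> 131 across 0.25 s sleeps. A minimal sketch condensed from the traced logic, assuming the same rpc_cmd helper:

# condensed form of the waitforio loop exercised below
waitforio() {
        local i=10 ret=1 read_io_count
        while (( i != 0 )); do
                read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 |
                        jq -r '.bdevs[0].num_read_ops')
                if [ "$read_io_count" -ge 100 ]; then
                        ret=0   # enough reads observed; safe to start the shutdown
                        break
                fi
                sleep 0.25
                (( i-- ))
        done
        return $ret
}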
00:23:53.037 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.037 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:53.037 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:53.037 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.037 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:53.297 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.297 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:53.297 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:53.297 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:53.297 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:23:53.297 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:23:53.297 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:53.297 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:53.297 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:53.297 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:53.297 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.297 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:53.297 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.297 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:53.297 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:53.297 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:53.558 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:53.558 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:53.558 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:53.558 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:53.558 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.558 08:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:53.558 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.558 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:53.558 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:53.558 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:53.819 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:53.819 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:53.819 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:53.819 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:53.819 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.819 08:22:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:53.819 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.819 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:53.819 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:53.819 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:23:53.819 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:23:53.819 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:23:53.819 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2037191 00:23:53.819 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2037191 ']' 00:23:53.819 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2037191 00:23:53.819 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:53.819 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.819 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2037191 00:23:54.080 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:54.080 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:54.080 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2037191' 00:23:54.080 killing process with pid 2037191 00:23:54.080 08:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2037191
00:23:54.080 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2037191
00:23:54.080 2312.00 IOPS, 144.50 MiB/s
[2024-11-28T07:22:51.369Z] Received shutdown signal, test time was about 1.023010 seconds
00:23:54.080
00:23:54.080                                                                                                Latency(us)
00:23:54.080 [2024-11-28T07:22:51.369Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:54.080 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.080      Verification LBA range: start 0x0 length 0x400
00:23:54.080      Nvme1n1                :       0.98     262.54      16.41       0.00       0.00  240992.21   18677.76  251658.24
00:23:54.080 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.080      Verification LBA range: start 0x0 length 0x400
00:23:54.080      Nvme2n1                :       0.95     211.69      13.23       0.00       0.00  290737.73    4314.45  248162.99
00:23:54.080 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.080      Verification LBA range: start 0x0 length 0x400
00:23:54.080      Nvme3n1                :       0.97     263.30      16.46       0.00       0.00  230763.52   16930.13  228939.09
00:23:54.080 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.080      Verification LBA range: start 0x0 length 0x400
00:23:54.080      Nvme4n1                :       0.98     261.68      16.36       0.00       0.00  227534.08   21626.88  244667.73
00:23:54.080 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.080      Verification LBA range: start 0x0 length 0x400
00:23:54.080      Nvme5n1                :       0.96     200.28      12.52       0.00       0.00  289202.63   16274.77  255153.49
00:23:54.080 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.080      Verification LBA range: start 0x0 length 0x400
00:23:54.080      Nvme6n1                :       0.96     200.08      12.51       0.00       0.00  284217.17   22282.24  258648.75
00:23:54.080 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.080      Verification LBA range: start 0x0 length 0x400
00:23:54.080      Nvme7n1                :       1.02     250.46      15.65       0.00       0.00  214489.60   22391.47  230686.72
00:23:54.080 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.080      Verification LBA range: start 0x0 length 0x400
00:23:54.080      Nvme8n1                :       0.96     265.32      16.58       0.00       0.00  205080.11   18350.08  246415.36
00:23:54.080 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.080      Verification LBA range: start 0x0 length 0x400
00:23:54.080      Nvme9n1                :       0.98     260.10      16.26       0.00       0.00  205277.23   19660.80  251658.24
00:23:54.080 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.080      Verification LBA range: start 0x0 length 0x400
00:23:54.080      Nvme10n1               :       0.97     198.28      12.39       0.00       0.00  262240.14   23483.73  277872.64
00:23:54.080 [2024-11-28T07:22:51.369Z] ===================================================================================================================
00:23:54.080 [2024-11-28T07:22:51.369Z] Total                  :    2373.74     148.36       0.00       0.00  241207.75    4314.45  277872.64
00:23:54.080 08:22:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2036804
08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f
./local-job0-0-verify.state 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:55.464 rmmod nvme_tcp 00:23:55.464 rmmod nvme_fabrics 00:23:55.464 rmmod nvme_keyring 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2036804 ']' 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2036804 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2036804 ']' 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2036804 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2036804 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2036804' 00:23:55.464 killing process with pid 2036804 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2036804 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2036804 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' 
== iso ']' 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.464 08:22:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:58.010 00:23:58.010 real 0m8.012s 00:23:58.010 user 0m24.392s 00:23:58.010 sys 0m1.336s 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:58.010 ************************************ 00:23:58.010 END TEST nvmf_shutdown_tc2 00:23:58.010 ************************************ 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:58.010 ************************************ 00:23:58.010 START TEST nvmf_shutdown_tc3 00:23:58.010 ************************************ 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:58.010 08:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:58.010 08:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:58.010 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:58.010 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:58.010 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.010 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:58.011 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:58.011 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
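The device discovery that tc3 repeats above (common.sh@410-429) maps each whitelisted PCI function to its kernel netdev purely through sysfs: the /sys/bus/pci/devices/$pci/net/ directory of a bound NIC contains one entry per interface, the [[ up == up ]] checks at @417-418 keep only interfaces that are administratively up, and the ${pci_net_devs[@]##*/} expansion strips the sysfs path down to the bare name. The core of that mapping is just:

# sysfs PCI-to-netdev mapping, as traced at common.sh@410-429 (up-state filtering omitted)
net_devs=()
for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob: netdev(s) bound to this function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
        net_devs+=("${pci_net_devs[@]}")
done
echo "${net_devs[@]}"   # -> cvl_0_0 cvl_0_1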
00:23:58.011 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:58.011 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:58.011 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:58.011 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:58.011 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:58.011 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:58.011 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:58.011 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:58.011 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:58.011 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:58.011 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:58.011 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:58.011 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:58.011 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:58.011 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:58.011 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:58.011 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:58.011 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:58.011 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:58.011 08:22:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:58.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:58.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:23:58.011 00:23:58.011 --- 10.0.0.2 ping statistics --- 00:23:58.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.011 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:58.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:58.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:23:58.011 00:23:58.011 --- 10.0.0.1 ping statistics --- 00:23:58.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.011 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2038492 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2038492 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2038492 ']' 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:58.011 08:22:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:58.271 [2024-11-28 08:22:55.334422] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:23:58.271 [2024-11-28 08:22:55.334511] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.271 [2024-11-28 08:22:55.421104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:58.271 [2024-11-28 08:22:55.451706] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.271 [2024-11-28 08:22:55.451735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.271 [2024-11-28 08:22:55.451744] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.271 [2024-11-28 08:22:55.451748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.271 [2024-11-28 08:22:55.451752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
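Taken together, the nvmf_tcp_init steps above build a two-namespace loopback topology: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace at 10.0.0.2, its peer cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens the NVMe/TCP port, and a ping in each direction proves the path before nvmf_tgt is launched inside the namespace. A condensed sketch of the sequence as traced (reconstructed from the xtrace, not copied from the script; the nvmf_tgt path is shortened from the full Jenkins workspace path):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                 # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator
# only once both pings succeed is the target started inside the namespace:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E

With this layout the initiator-side tools in the root namespace reach the target at 10.0.0.2:4420 over a real E810 link, while the target process sees only its own namespaced interface.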
00:23:58.271 [2024-11-28 08:22:55.453219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:58.271 [2024-11-28 08:22:55.453453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:58.271 [2024-11-28 08:22:55.453601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.271 [2024-11-28 08:22:55.453603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:59.213 [2024-11-28 08:22:56.178834] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.213 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:59.213 Malloc1 00:23:59.213 [2024-11-28 08:22:56.291105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.213 Malloc2 00:23:59.213 Malloc3 00:23:59.213 Malloc4 00:23:59.213 Malloc5 00:23:59.213 Malloc6 00:23:59.213 Malloc7 00:23:59.475 Malloc8 00:23:59.475 Malloc9 00:23:59.475 Malloc10 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2038732 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2038732 /var/tmp/bdevperf.sock 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2038732 ']' 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:59.475 08:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:59.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:59.475 { 00:23:59.475 "params": { 00:23:59.475 "name": "Nvme$subsystem", 00:23:59.475 "trtype": "$TEST_TRANSPORT", 00:23:59.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.475 "adrfam": "ipv4", 00:23:59.475 "trsvcid": "$NVMF_PORT", 00:23:59.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.475 "hdgst": ${hdgst:-false}, 00:23:59.475 "ddgst": ${ddgst:-false} 00:23:59.475 }, 00:23:59.475 "method": "bdev_nvme_attach_controller" 00:23:59.475 } 00:23:59.475 EOF 00:23:59.475 )") 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:59.475 { 00:23:59.475 "params": { 00:23:59.475 "name": "Nvme$subsystem", 00:23:59.475 "trtype": "$TEST_TRANSPORT", 00:23:59.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.475 "adrfam": "ipv4", 00:23:59.475 "trsvcid": "$NVMF_PORT", 00:23:59.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.475 "hdgst": ${hdgst:-false}, 00:23:59.475 "ddgst": ${ddgst:-false} 00:23:59.475 }, 00:23:59.475 "method": "bdev_nvme_attach_controller" 00:23:59.475 } 00:23:59.475 EOF 00:23:59.475 )") 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:59.475 { 00:23:59.475 "params": { 00:23:59.475 
"name": "Nvme$subsystem", 00:23:59.475 "trtype": "$TEST_TRANSPORT", 00:23:59.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.475 "adrfam": "ipv4", 00:23:59.475 "trsvcid": "$NVMF_PORT", 00:23:59.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.475 "hdgst": ${hdgst:-false}, 00:23:59.475 "ddgst": ${ddgst:-false} 00:23:59.475 }, 00:23:59.475 "method": "bdev_nvme_attach_controller" 00:23:59.475 } 00:23:59.475 EOF 00:23:59.475 )") 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:59.475 { 00:23:59.475 "params": { 00:23:59.475 "name": "Nvme$subsystem", 00:23:59.475 "trtype": "$TEST_TRANSPORT", 00:23:59.475 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.475 "adrfam": "ipv4", 00:23:59.475 "trsvcid": "$NVMF_PORT", 00:23:59.475 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.475 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.475 "hdgst": ${hdgst:-false}, 00:23:59.475 "ddgst": ${ddgst:-false} 00:23:59.475 }, 00:23:59.475 "method": "bdev_nvme_attach_controller" 00:23:59.475 } 00:23:59.475 EOF 00:23:59.475 )") 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:59.475 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:59.476 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:59.476 { 00:23:59.476 "params": { 00:23:59.476 "name": "Nvme$subsystem", 00:23:59.476 "trtype": "$TEST_TRANSPORT", 00:23:59.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.476 "adrfam": "ipv4", 00:23:59.476 "trsvcid": "$NVMF_PORT", 00:23:59.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.476 "hdgst": ${hdgst:-false}, 00:23:59.476 "ddgst": ${ddgst:-false} 00:23:59.476 }, 00:23:59.476 "method": "bdev_nvme_attach_controller" 00:23:59.476 } 00:23:59.476 EOF 00:23:59.476 )") 00:23:59.476 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:59.476 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:59.476 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:59.476 { 00:23:59.476 "params": { 00:23:59.476 "name": "Nvme$subsystem", 00:23:59.476 "trtype": "$TEST_TRANSPORT", 00:23:59.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.476 "adrfam": "ipv4", 00:23:59.476 "trsvcid": "$NVMF_PORT", 00:23:59.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.476 "hdgst": ${hdgst:-false}, 00:23:59.476 "ddgst": ${ddgst:-false} 00:23:59.476 }, 00:23:59.476 "method": "bdev_nvme_attach_controller" 00:23:59.476 } 00:23:59.476 EOF 00:23:59.476 )") 00:23:59.476 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:59.476 [2024-11-28 08:22:56.737349] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:23:59.476 [2024-11-28 08:22:56.737404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2038732 ] 00:23:59.476 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:59.476 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:59.476 { 00:23:59.476 "params": { 00:23:59.476 "name": "Nvme$subsystem", 00:23:59.476 "trtype": "$TEST_TRANSPORT", 00:23:59.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.476 "adrfam": "ipv4", 00:23:59.476 "trsvcid": "$NVMF_PORT", 00:23:59.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.476 "hdgst": ${hdgst:-false}, 00:23:59.476 "ddgst": ${ddgst:-false} 00:23:59.476 }, 00:23:59.476 "method": "bdev_nvme_attach_controller" 00:23:59.476 } 00:23:59.476 EOF 00:23:59.476 )") 00:23:59.476 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:59.476 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:59.476 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:59.476 { 00:23:59.476 "params": { 00:23:59.476 "name": "Nvme$subsystem", 00:23:59.476 "trtype": "$TEST_TRANSPORT", 00:23:59.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.476 "adrfam": "ipv4", 00:23:59.476 "trsvcid": "$NVMF_PORT", 00:23:59.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.476 "hdgst": ${hdgst:-false}, 00:23:59.476 "ddgst": ${ddgst:-false} 00:23:59.476 }, 00:23:59.476 "method": "bdev_nvme_attach_controller" 00:23:59.476 } 00:23:59.476 EOF 00:23:59.476 )") 00:23:59.476 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:59.476 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:59.476 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:59.476 { 00:23:59.476 "params": { 00:23:59.476 "name": "Nvme$subsystem", 00:23:59.476 "trtype": "$TEST_TRANSPORT", 00:23:59.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.476 "adrfam": "ipv4", 00:23:59.476 "trsvcid": "$NVMF_PORT", 00:23:59.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.476 "hdgst": ${hdgst:-false}, 00:23:59.476 "ddgst": ${ddgst:-false} 00:23:59.476 }, 00:23:59.476 "method": "bdev_nvme_attach_controller" 00:23:59.476 } 00:23:59.476 EOF 00:23:59.476 )") 00:23:59.476 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:59.737 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:59.737 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:59.737 { 00:23:59.737 "params": { 00:23:59.737 "name": "Nvme$subsystem", 00:23:59.737 "trtype": "$TEST_TRANSPORT", 00:23:59.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.737 
"adrfam": "ipv4", 00:23:59.737 "trsvcid": "$NVMF_PORT", 00:23:59.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.737 "hdgst": ${hdgst:-false}, 00:23:59.737 "ddgst": ${ddgst:-false} 00:23:59.737 }, 00:23:59.737 "method": "bdev_nvme_attach_controller" 00:23:59.737 } 00:23:59.737 EOF 00:23:59.737 )") 00:23:59.737 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:59.737 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:23:59.737 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:59.737 08:22:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:59.737 "params": { 00:23:59.737 "name": "Nvme1", 00:23:59.737 "trtype": "tcp", 00:23:59.737 "traddr": "10.0.0.2", 00:23:59.737 "adrfam": "ipv4", 00:23:59.737 "trsvcid": "4420", 00:23:59.737 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.737 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:59.737 "hdgst": false, 00:23:59.737 "ddgst": false 00:23:59.737 }, 00:23:59.737 "method": "bdev_nvme_attach_controller" 00:23:59.737 },{ 00:23:59.737 "params": { 00:23:59.737 "name": "Nvme2", 00:23:59.737 "trtype": "tcp", 00:23:59.737 "traddr": "10.0.0.2", 00:23:59.737 "adrfam": "ipv4", 00:23:59.737 "trsvcid": "4420", 00:23:59.737 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:59.737 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:59.737 "hdgst": false, 00:23:59.737 "ddgst": false 00:23:59.737 }, 00:23:59.737 "method": "bdev_nvme_attach_controller" 00:23:59.737 },{ 00:23:59.737 "params": { 00:23:59.737 "name": "Nvme3", 00:23:59.737 "trtype": "tcp", 00:23:59.737 "traddr": "10.0.0.2", 00:23:59.737 "adrfam": "ipv4", 00:23:59.737 "trsvcid": "4420", 00:23:59.737 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:59.737 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:59.737 "hdgst": false, 00:23:59.737 "ddgst": false 00:23:59.737 }, 00:23:59.737 "method": "bdev_nvme_attach_controller" 00:23:59.737 },{ 00:23:59.737 "params": { 00:23:59.737 "name": "Nvme4", 00:23:59.737 "trtype": "tcp", 00:23:59.737 "traddr": "10.0.0.2", 00:23:59.738 "adrfam": "ipv4", 00:23:59.738 "trsvcid": "4420", 00:23:59.738 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:59.738 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:59.738 "hdgst": false, 00:23:59.738 "ddgst": false 00:23:59.738 }, 00:23:59.738 "method": "bdev_nvme_attach_controller" 00:23:59.738 },{ 00:23:59.738 "params": { 00:23:59.738 "name": "Nvme5", 00:23:59.738 "trtype": "tcp", 00:23:59.738 "traddr": "10.0.0.2", 00:23:59.738 "adrfam": "ipv4", 00:23:59.738 "trsvcid": "4420", 00:23:59.738 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:59.738 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:59.738 "hdgst": false, 00:23:59.738 "ddgst": false 00:23:59.738 }, 00:23:59.738 "method": "bdev_nvme_attach_controller" 00:23:59.738 },{ 00:23:59.738 "params": { 00:23:59.738 "name": "Nvme6", 00:23:59.738 "trtype": "tcp", 00:23:59.738 "traddr": "10.0.0.2", 00:23:59.738 "adrfam": "ipv4", 00:23:59.738 "trsvcid": "4420", 00:23:59.738 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:59.738 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:59.738 "hdgst": false, 00:23:59.738 "ddgst": false 00:23:59.738 }, 00:23:59.738 "method": "bdev_nvme_attach_controller" 00:23:59.738 },{ 00:23:59.738 "params": { 00:23:59.738 "name": "Nvme7", 00:23:59.738 "trtype": "tcp", 00:23:59.738 "traddr": "10.0.0.2", 
00:23:59.738 "adrfam": "ipv4", 00:23:59.738 "trsvcid": "4420", 00:23:59.738 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:59.738 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:59.738 "hdgst": false, 00:23:59.738 "ddgst": false 00:23:59.738 }, 00:23:59.738 "method": "bdev_nvme_attach_controller" 00:23:59.738 },{ 00:23:59.738 "params": { 00:23:59.738 "name": "Nvme8", 00:23:59.738 "trtype": "tcp", 00:23:59.738 "traddr": "10.0.0.2", 00:23:59.738 "adrfam": "ipv4", 00:23:59.738 "trsvcid": "4420", 00:23:59.738 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:59.738 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:59.738 "hdgst": false, 00:23:59.738 "ddgst": false 00:23:59.738 }, 00:23:59.738 "method": "bdev_nvme_attach_controller" 00:23:59.738 },{ 00:23:59.738 "params": { 00:23:59.738 "name": "Nvme9", 00:23:59.738 "trtype": "tcp", 00:23:59.738 "traddr": "10.0.0.2", 00:23:59.738 "adrfam": "ipv4", 00:23:59.738 "trsvcid": "4420", 00:23:59.738 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:59.738 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:59.738 "hdgst": false, 00:23:59.738 "ddgst": false 00:23:59.738 }, 00:23:59.738 "method": "bdev_nvme_attach_controller" 00:23:59.738 },{ 00:23:59.738 "params": { 00:23:59.738 "name": "Nvme10", 00:23:59.738 "trtype": "tcp", 00:23:59.738 "traddr": "10.0.0.2", 00:23:59.738 "adrfam": "ipv4", 00:23:59.738 "trsvcid": "4420", 00:23:59.738 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:59.738 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:59.738 "hdgst": false, 00:23:59.738 "ddgst": false 00:23:59.738 }, 00:23:59.738 "method": "bdev_nvme_attach_controller" 00:23:59.738 }' 00:23:59.738 [2024-11-28 08:22:56.828206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.738 [2024-11-28 08:22:56.864333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.648 Running I/O for 10 seconds... 
00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2038492 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2038492 ']' 
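The waitforio helper traced above polls the bdevperf RPC socket until the first bdev shows read traffic: up to ten iterations of bdev_get_iostat on Nvme1n1, extracting num_read_ops with jq and declaring success once at least 100 reads have completed (here the first sample already returns 131, so ret=0 immediately). A sketch of the loop, reconstructed from the xtrace; the retry delay is an assumption (the trace never reaches a second iteration), and rpc_cmd stands for SPDK's scripts/rpc.py wrapper:

waitforio() {
    local rpc_sock=$1 bdev=$2 ret=1 i count
    for ((i = 10; i != 0; i--)); do
        count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')
        if [ "$count" -ge 100 ]; then   # I/O is flowing; stop polling
            ret=0
            break
        fi
        sleep 0.25                      # assumed retry delay, not shown in the trace
    done
    return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme1n1   # trace: read_io_count=131 -> ret=0

With I/O confirmed in flight, the test then kills the nvmf target (pid 2038492) out from under the initiator, which produces the connection-teardown errors that follow.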
00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2038492 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2038492 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:02.228 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2038492' 00:24:02.229 killing process with pid 2038492 00:24:02.229 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2038492 00:24:02.229 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2038492
00:24:02.229 [2024-11-28 08:22:59.385518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1445810 is same with the state(6) to be set
00:24:02.229 (last message repeated for tqpair=0x1445810, timestamps 08:22:59.385568 through 08:22:59.385862)
00:24:02.230 [2024-11-28 08:22:59.387805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1445d00 is same with the state(6) to be set
00:24:02.230 (last message repeated for tqpair=0x1445d00, timestamps 08:22:59.387832 through 08:22:59.388127)
00:24:02.231 [2024-11-28 08:22:59.390583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set
00:24:02.231 (last message repeated for tqpair=0x1446b90, timestamps 08:22:59.390599 through 08:22:59.390775)
state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.390779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.390784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.390788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.390794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.390799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.390804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.390808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.390813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.390818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.390822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.390826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.390831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.390835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.390840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.390844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.390849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.390854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.390859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.390863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.390868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.390872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.390877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.390881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.390886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446b90 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.391832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.391844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.391849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.391854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.391859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.391864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.391868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.231 [2024-11-28 08:22:59.391875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 
08:22:59.391933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.391997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same 
with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.392140] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447060 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.393128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.393150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.393156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.393165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.393169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.393175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.393180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.393185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.393190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.393195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.393200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.393204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.232 [2024-11-28 08:22:59.393209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the 
state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393290] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.393456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447530 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.394097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.233 [2024-11-28 
08:22:59.394118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.394124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.394128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.394133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.394138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.394143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.394148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.394153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.394161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.394166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.394170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.394175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.394180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.233 [2024-11-28 08:22:59.394184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same 
with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394331] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1447a20 is same with the state(6) to be set 00:24:02.234 [2024-11-28 08:22:59.394862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the 
state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.394998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.395002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.395007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.395011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.395016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.395020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.395025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.395029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.395035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.395040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.395045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.395050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.395054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.395059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.395063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.395068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.395072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.395077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 08:22:59.395081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1473520 is same with the state(6) to be set 00:24:02.235 [2024-11-28 
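For context on the flood above: the message comes from a guard in SPDK's TCP transport. Both the target side (tcp.c:1773, nvmf_tcp_qpair_set_recv_state) and the initiator side seen later in this log (nvme_tcp.c: 326, nvme_tcp_qpair_set_recv_state) log and ignore any request to set the recv state the qpair is already in, so a teardown path that keeps requesting the same state emits one line per call. A minimal standalone sketch of that check follows; the type names are stand-ins and the enum value 6 is an assumption read off the "state(6)" in the log, not taken from SPDK headers.

#include <stdio.h>

/* Abridged recv-state enum; only the two values used here. That 6 is the
 * terminal/error state is an assumption based on "state(6)" above. */
enum pdu_recv_state {
	PDU_RECV_STATE_AWAIT_PDU_READY = 0,
	PDU_RECV_STATE_ERROR = 6
};

/* Stand-in for struct spdk_nvmf_tcp_qpair; only the field the guard reads. */
struct tcp_qpair {
	enum pdu_recv_state recv_state;
};

/* The guard: a request to re-enter the current state is logged and ignored,
 * which is exactly the line repeated thousands of times in this log. */
static void set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
{
	if (tqpair->recv_state == state) {
		fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
		        (void *)tqpair, (int)state);
		return;
	}
	tqpair->recv_state = state;
}

int main(void)
{
	struct tcp_qpair q = { PDU_RECV_STATE_AWAIT_PDU_READY };

	set_recv_state(&q, PDU_RECV_STATE_ERROR); /* transitions silently */
	set_recv_state(&q, PDU_RECV_STATE_ERROR); /* already set: one ERRLOG line */
	return 0;
}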
00:24:02.235 [2024-11-28 08:22:59.399338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:02.235 [2024-11-28 08:22:59.399372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:02.235 [2024-11-28 08:22:59.399383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:02.235 [2024-11-28 08:22:59.399391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:02.235 [2024-11-28 08:22:59.399400] nvme_qpair.c: 223:nvme_admin_qpair_print_command:
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:02.236 [2024-11-28 08:22:59.399407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:02.236 [2024-11-28 08:22:59.399416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:02.236 [2024-11-28 08:22:59.399423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:02.236 [2024-11-28 08:22:59.399431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9c610 is same with the state(6) to be set
00:24:02.236 [2024-11-28 08:22:59.399464 - 08:22:59.399813] [same sequence repeated: four ASYNC EVENT REQUESTs (cid 0-3) completed ABORTED - SQ DELETION (00/08), then one nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state state(6) *ERROR*, for each of tqpair=0x1fa56a0, 0x1ffcbb0, 0x1feed50, 0x1b84850]
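For context: the "(00/08)" in each completion above is the NVMe status code type / status code pair. Status code type 0h is Generic Command Status and status code 08h is "Command Aborted due to SQ Deletion" per the NVMe spec, which is the expected fate of in-flight ASYNC EVENT REQUESTs when each controller's admin submission queue is deleted at shutdown (hence NOTICE rather than ERROR). A small illustrative decoder (a hypothetical helper, not an SPDK API):

#include <stdio.h>

/* Hypothetical helper mirroring how the "(SCT/SC)" pair decodes. */
static const char *decode_status(unsigned int sct, unsigned int sc)
{
	if (sct == 0x0 && sc == 0x08) {
		return "ABORTED - SQ DELETION";
	}
	return "(other status)";
}

int main(void)
{
	/* Every completion above reports (00/08): */
	printf("(00/08) -> %s\n", decode_status(0x0, 0x08));
	return 0;
}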
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.236 [2024-11-28 08:22:59.399845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.236 [2024-11-28 08:22:59.399853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.236 [2024-11-28 08:22:59.399861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.236 [2024-11-28 08:22:59.399869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.236 [2024-11-28 08:22:59.399882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.236 [2024-11-28 08:22:59.399890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.236 [2024-11-28 08:22:59.399898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.236 [2024-11-28 08:22:59.399905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb0980 is same with the state(6) to be set 00:24:02.236 [2024-11-28 08:22:59.399928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.236 [2024-11-28 08:22:59.399937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.236 [2024-11-28 08:22:59.399945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.236 [2024-11-28 08:22:59.399952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.236 [2024-11-28 08:22:59.399961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.236 [2024-11-28 08:22:59.399968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.236 [2024-11-28 08:22:59.399977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.236 [2024-11-28 08:22:59.399984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.236 [2024-11-28 08:22:59.399991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa5c90 is same with the state(6) to be set 00:24:02.236 [2024-11-28 08:22:59.400017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.236 [2024-11-28 08:22:59.400026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.236 [2024-11-28 08:22:59.400035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:02.236 [2024-11-28 08:22:59.400043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.236 [2024-11-28 08:22:59.400051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.236 [2024-11-28 08:22:59.400058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.236 [2024-11-28 08:22:59.400066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.237 [2024-11-28 08:22:59.400074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b84cc0 is same with the state(6) to be set 00:24:02.237 [2024-11-28 08:22:59.400103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.237 [2024-11-28 08:22:59.400112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.237 [2024-11-28 08:22:59.400130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.237 [2024-11-28 08:22:59.400146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.237 [2024-11-28 08:22:59.400168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b82fc0 is same with the state(6) to be set 00:24:02.237 [2024-11-28 08:22:59.400200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.237 [2024-11-28 08:22:59.400208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.237 [2024-11-28 08:22:59.400224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.237 [2024-11-28 08:22:59.400240] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.237 [2024-11-28 08:22:59.400256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb0180 is same with the state(6) to be set 00:24:02.237 [2024-11-28 08:22:59.400700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.237 [2024-11-28 08:22:59.400722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.237 [2024-11-28 08:22:59.400745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.237 [2024-11-28 08:22:59.400763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.237 [2024-11-28 08:22:59.400780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.237 [2024-11-28 08:22:59.400797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.237 [2024-11-28 08:22:59.400817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.237 [2024-11-28 08:22:59.400834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.237 [2024-11-28 08:22:59.400851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 
nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.237 [2024-11-28 08:22:59.400868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.237 [2024-11-28 08:22:59.400885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.237 [2024-11-28 08:22:59.400902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.237 [2024-11-28 08:22:59.400918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.237 [2024-11-28 08:22:59.400935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.237 [2024-11-28 08:22:59.400951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.237 [2024-11-28 08:22:59.400968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.237 [2024-11-28 08:22:59.400984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.400994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.237 [2024-11-28 08:22:59.401001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.401010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.237 [2024-11-28 08:22:59.401018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.401029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.237 [2024-11-28 08:22:59.401037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.401047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.237 [2024-11-28 08:22:59.401055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.401064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.237 [2024-11-28 08:22:59.401071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.401080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.237 [2024-11-28 08:22:59.401088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.237 [2024-11-28 08:22:59.401097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:02.238 [2024-11-28 08:22:59.401381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 
08:22:59.401548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.238 [2024-11-28 08:22:59.401607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.238 [2024-11-28 08:22:59.401614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.401623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.401631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.401640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.401647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.401656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.401665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.401674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.401681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.401690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.401698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.401707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.401714] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.401723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.401731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.401741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.401748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.401757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.401765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.401774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.401781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.401791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.401798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.401826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:02.239 [2024-11-28 08:22:59.401901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.401911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.401923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.401931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.401941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.401948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.401957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.401967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.401977] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.401985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.401994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.402001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.402011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.402018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.402028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.402035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.402044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.402051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.402061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.402069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.402078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.402085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.402095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.402102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.402111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.402119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.402128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.402135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.239 [2024-11-28 08:22:59.402145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.239 [2024-11-28 08:22:59.402152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.402167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.402175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.402186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.402198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.402208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.402215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.402225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.402232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.402241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.402248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.402258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.402265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.402275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.402282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.402292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.402299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.402308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.402315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.402325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.402332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.402342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.402349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.402359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.402366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.402375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.402383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.402395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.402403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.402413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.415024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.415077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.415087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.415097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.415105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.415115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.415123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.415133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.415140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.415150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.415157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.415175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.415183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.415192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.415200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.415209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.415217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.415226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.415233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.415243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.415250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.415260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.415267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.415282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.415290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.415299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.415307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.415316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.415324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.415333] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.240 [2024-11-28 08:22:59.415341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.240 [2024-11-28 08:22:59.415350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.241 [2024-11-28 08:22:59.415358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.241 [2024-11-28 08:22:59.415367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.241 [2024-11-28 08:22:59.415375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.241 [2024-11-28 08:22:59.415384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.241 [2024-11-28 08:22:59.415391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.241 [2024-11-28 08:22:59.415401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.241 [2024-11-28 08:22:59.415410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.241 [2024-11-28 08:22:59.415419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.241 [2024-11-28 08:22:59.415427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.241 [2024-11-28 08:22:59.415436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.241 [2024-11-28 08:22:59.415443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.241 [2024-11-28 08:22:59.415453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.241 [2024-11-28 08:22:59.415461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.241 [2024-11-28 08:22:59.415470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.241 [2024-11-28 08:22:59.415477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.241 [2024-11-28 08:22:59.415487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.241 [2024-11-28 08:22:59.415496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.241 [2024-11-28 08:22:59.415506] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.241 [2024-11-28 08:22:59.415514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.241 [2024-11-28 08:22:59.415524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.241 [2024-11-28 08:22:59.415531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.241 [2024-11-28 08:22:59.415541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.241 [2024-11-28 08:22:59.415548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.241 [2024-11-28 08:22:59.415558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.241 [2024-11-28 08:22:59.415565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.241 [2024-11-28 08:22:59.415574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.241 [2024-11-28 08:22:59.415582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.241 [2024-11-28 08:22:59.415591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.241 [2024-11-28 08:22:59.415599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.241 [2024-11-28 08:22:59.415609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.241 [2024-11-28 08:22:59.415616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.241 [2024-11-28 08:22:59.415626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.241 [2024-11-28 08:22:59.415634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.241 [2024-11-28 08:22:59.415643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.241 [2024-11-28 08:22:59.415650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.241 [2024-11-28 08:22:59.415660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.241 [2024-11-28 08:22:59.415667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.241 [2024-11-28 08:22:59.415676] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f876e0 is same with the state(6) to be set
00:24:02.241 [2024-11-28 08:22:59.416090-.416266] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:55-63 nsid:1 lba:23424-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; all 9 commands ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:02.241 [2024-11-28 08:22:59.416276-.417199] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-54 nsid:1 lba:16384-23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; all 55 commands ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:02.243 [2024-11-28 08:22:59.417320-.417458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair (9): Bad file descriptor, for tqpair=0x1a9c610, 0x1fa56a0, 0x1ffcbb0, 0x1feed50, 0x1b84850, 0x1fb0980, 0x1fa5c90, 0x1b84cc0, 0x1b82fc0, 0x1fb0180
00:24:02.243 [2024-11-28 08:22:59.421344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:24:02.243 [2024-11-28 08:22:59.421731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:24:02.243 [2024-11-28 08:22:59.421761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:24:02.243 [2024-11-28 08:22:59.422205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.243 [2024-11-28 08:22:59.422232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b82fc0 with addr=10.0.0.2, port=4420
00:24:02.243 [2024-11-28 08:22:59.422243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b82fc0 is same with the state(6) to be set
00:24:02.243 [2024-11-28 08:22:59.422900-.423025] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 (4 occurrences)
00:24:02.243 [2024-11-28 08:22:59.423686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.243 [2024-11-28 08:22:59.423703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb0980 with addr=10.0.0.2, port=4420
00:24:02.243 [2024-11-28 08:22:59.423711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb0980 is same with the state(6) to be set
00:24:02.243 [2024-11-28 08:22:59.424048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.243 [2024-11-28 08:22:59.424059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feed50 with addr=10.0.0.2, port=4420
00:24:02.243 [2024-11-28 08:22:59.424066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feed50 is same with the state(6) to be set
00:24:02.243 [2024-11-28 08:22:59.424077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b82fc0 (9): Bad file descriptor
00:24:02.244 [2024-11-28 08:22:59.424118-.424163] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 (2 occurrences)
00:24:02.244 [2024-11-28 08:22:59.424242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb0980 (9): Bad file descriptor
00:24:02.244 [2024-11-28 08:22:59.424254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feed50 (9): Bad file descriptor
00:24:02.244 [2024-11-28 08:22:59.424268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:24:02.244 [2024-11-28 08:22:59.424274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:24:02.244 [2024-11-28 08:22:59.424283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:24:02.244 [2024-11-28 08:22:59.424292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
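Note on the connect() failures above (this annotation is not part of the test output): on Linux, errno 111 is ECONNREFUSED, meaning the TCP connection attempt reached 10.0.0.2 but nothing was accepting on port 4420 (the standard NVMe/TCP port) at that instant, which is what you would expect while the target is tearing down its listeners and the host keeps retrying its controller reconnects. A minimal standalone C sketch, using the address and port taken from the log, that reproduces the same errno against any port with no listener:

    /* sketch only, not code from the SPDK tree */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };

        if (fd < 0)
            return 1;
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);  /* address/port mirror the log */
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
            /* with no listener this prints: connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }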
00:24:02.244 [2024-11-28 08:22:59.424370-.425470] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; all 64 commands ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:02.246 [2024-11-28 08:22:59.425478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e2ed90 is same with the state(6) to be set
00:24:02.246 [2024-11-28 08:22:59.425539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:24:02.246 [2024-11-28 08:22:59.425547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:24:02.246 [2024-11-28 08:22:59.425555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:24:02.246 [2024-11-28 08:22:59.425562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:24:02.246 [2024-11-28 08:22:59.425570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:24:02.246 [2024-11-28 08:22:59.425577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:24:02.246 [2024-11-28 08:22:59.425584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:24:02.246 [2024-11-28 08:22:59.425590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
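Note on the "(00/08) ... p:0 m:0 dnr:0" notation in the aborted completions above (this annotation is not part of the test output): the pair is SCT/SC from the NVMe completion status. SCT 0x00 is the generic command status set and SC 0x08 is "Command Aborted due to SQ Deletion", which is why every queued READ/WRITE is reported this way once the submission queues are deleted during the controller resets. A small decode sketch (not code from the SPDK tree), assuming the standard completion queue entry DW3 status layout:

    #include <stdio.h>
    #include <stdint.h>

    /* Status halfword (CQE DW3 bits 31:16):
     *   bit 0     P   (phase tag)
     *   bits 8:1  SC  (status code; 0x08 = Command Aborted due to SQ Deletion)
     *   bits 11:9 SCT (status code type; 0x0 = generic command status)
     *   bit 14    M   (more)
     *   bit 15    DNR (do not retry)
     */
    static void print_status(uint16_t status)
    {
        unsigned p   = status & 0x1;
        unsigned sc  = (status >> 1) & 0xff;
        unsigned sct = (status >> 9) & 0x7;
        unsigned m   = (status >> 14) & 0x1;
        unsigned dnr = (status >> 15) & 0x1;
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    }

    int main(void)
    {
        /* prints "(00/08) p:0 m:0 dnr:0", matching the aborted commands above */
        print_status(0x08 << 1);
        return 0;
    }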
00:24:02.246 [2024-11-28 08:22:59.426842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:24:02.246 [2024-11-28 08:22:59.427376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.246 [2024-11-28 08:22:59.427415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffcbb0 with addr=10.0.0.2, port=4420
00:24:02.246 [2024-11-28 08:22:59.427427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffcbb0 is same with the state(6) to be set
00:24:02.246 [2024-11-28 08:22:59.427749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffcbb0 (9): Bad file descriptor
00:24:02.246 [2024-11-28 08:22:59.427875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:24:02.246 [2024-11-28 08:22:59.427886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:24:02.246 [2024-11-28 08:22:59.427894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:24:02.246 [2024-11-28 08:22:59.427902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:24:02.246 [2024-11-28 08:22:59.427940-.428910] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-55 nsid:1 lba:16384-23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; all 56 commands ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:02.247 [2024-11-28 08:22:59.428920] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.247 [2024-11-28 08:22:59.428927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.247 [2024-11-28 08:22:59.428937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.247 [2024-11-28 08:22:59.428945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.247 [2024-11-28 08:22:59.428955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.428962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.428972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.428980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.428989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.428997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.429006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.429014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.429023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.429030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.429040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.429047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.429060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d88cd0 is same with the state(6) to be set 00:24:02.248 [2024-11-28 08:22:59.430344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.248 [2024-11-28 08:22:59.430835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.248 [2024-11-28 08:22:59.430845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.430852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.430861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.430869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.430878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.430885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.430895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.430902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.430912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.430919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.430929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.430936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.430946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.430953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.430962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.430970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.430979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.430986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.430998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431250] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431420] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.249 [2024-11-28 08:22:59.431455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.249 [2024-11-28 08:22:59.431464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d89d40 is same with the state(6) to be set 00:24:02.250 [2024-11-28 08:22:59.432734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.432747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.432760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.432769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.432781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.432790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.432801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.432811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.432823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.432830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.432840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.432847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.432857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.432865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.432875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.432882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.432892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.432899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.432909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.432916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.432926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.432936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.432946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.432954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.432963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.432971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.432981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.432988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.432998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.433005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.433015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.433022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.433032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.433040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.433049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.433057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.433066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.433074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.433083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.433091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.433101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.433108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.433118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.433126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.433135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.433142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.433154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.433165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.433175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.433182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.433192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.433200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.433210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.433217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.433227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.433235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.433244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.433252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.433261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.433269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.433278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.250 [2024-11-28 08:22:59.433286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.250 [2024-11-28 08:22:59.433295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:02.251 [2024-11-28 08:22:59.433578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 
08:22:59.433749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.433855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.433863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f889a0 is same with the state(6) to be set 00:24:02.251 [2024-11-28 08:22:59.435128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.435140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.435152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.435165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.435177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.435186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.251 [2024-11-28 08:22:59.435198] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.251 [2024-11-28 08:22:59.435207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435383] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435554] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.252 [2024-11-28 08:22:59.435843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.252 [2024-11-28 08:22:59.435850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.435860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.435867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.435877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.435884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.435894] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.435901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.435911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.435918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.435928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.435935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.435947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.435954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.435964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.435971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.435981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.435988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.435997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.436005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.436014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.436021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.436031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.436038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.436048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.436055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.436065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.436072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.436082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.436089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.436098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.436106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.436115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.436123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.436132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.436140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.436149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.436167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.436177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.436185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.436195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.436202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.436212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.436220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.436229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.436236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.436246] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.436253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.436261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f89c60 is same with the state(6) to be set 00:24:02.253 [2024-11-28 08:22:59.437543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.437558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.437571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.437580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.437592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.437601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.437612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.437621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.437632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.437641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.437652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.437661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.437672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.437684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.437695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.437703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.437713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.437721] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.437730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.437737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.437747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.437754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.437764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.437771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.437781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.437788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.437798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.437805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.437815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.437822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.253 [2024-11-28 08:22:59.437832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.253 [2024-11-28 08:22:59.437839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.437848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.437856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.437865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.437872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.437882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.437890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.437901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.437908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.437918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.437925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.437934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.437942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.437951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.437958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.437968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.437975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.437985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.437992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:02.254 [2024-11-28 08:22:59.438416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.254 [2024-11-28 08:22:59.438519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.254 [2024-11-28 08:22:59.438527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.438536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.438543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.438556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.438564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.438573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.438580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 
08:22:59.438590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.438597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.438607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.438614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.438624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.438631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.438641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.438648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.438658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.438665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.438673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8af20 is same with the state(6) to be set 00:24:02.255 [2024-11-28 08:22:59.439938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.439952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.439966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.439976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.439987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.439996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440037] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440215] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.255 [2024-11-28 08:22:59.440453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.255 [2024-11-28 08:22:59.440463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.256 [2024-11-28 08:22:59.440470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.256 [2024-11-28 08:22:59.440481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.256 [2024-11-28 08:22:59.440489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.256 [2024-11-28 08:22:59.440498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.256 [2024-11-28 08:22:59.440505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.256 [2024-11-28 08:22:59.440515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.256 [2024-11-28 08:22:59.440522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.256 [2024-11-28 08:22:59.440532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.256 [2024-11-28 08:22:59.440539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.256 [2024-11-28 08:22:59.440549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.256 [2024-11-28 08:22:59.440556] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:02.256 [2024-11-28 08:22:59.440566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.256 [2024-11-28 08:22:59.440573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION pair repeats for cid:36 through cid:62 (lba:20992 up to lba:24320, the LBA advancing by 128 blocks per command), with only the microsecond timestamps changing ...]
00:24:02.256 [2024-11-28 08:22:59.441042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:02.256 [2024-11-28 08:22:59.441050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:02.256 [2024-11-28 08:22:59.441058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8c1e0 is same with the state(6) to be set
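[Note: the (00/08) in every aborted completion above is the NVMe SCT/SC status pair: status code type 0x00 (generic command status) with status code 0x08, command aborted due to SQ deletion, which is what in-flight READs return when their submission queue is torn down by a controller reset. A quick sketch of splitting a raw status field into those two values; the 0x0008 below is a constructed example, not a value taken from this run:]
status=0x0008                                  # NVMe status field: SC in bits 7:0, SCT in bits 10:8
sc=$(( status & 0xff ))
sct=$(( (status >> 8) & 0x7 ))
printf 'sct=0x%02x sc=0x%02x\n' "$sct" "$sc"   # prints sct=0x00 sc=0x08, i.e. ABORTED - SQ DELETION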
00:24:02.256 [2024-11-28 08:22:59.442604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:24:02.256 [2024-11-28 08:22:59.442631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:24:02.256 [2024-11-28 08:22:59.442642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:24:02.256 [2024-11-28 08:22:59.442653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:24:02.256 [2024-11-28 08:22:59.442739] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:24:02.256 [2024-11-28 08:22:59.442754] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:24:02.256 [2024-11-28 08:22:59.443045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:24:02.256 task offset: 30848 on job bdev=Nvme3n1 fails
00:24:02.256
00:24:02.256 Latency(us)
00:24:02.256 [2024-11-28T07:22:59.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:02.256 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:02.256 Job: Nvme1n1 ended in about 0.86 seconds with error
00:24:02.256 Verification LBA range: start 0x0 length 0x400
00:24:02.256 Nvme1n1 : 0.86 149.29 9.33 74.64 0.00 282369.99 18568.53 272629.76
00:24:02.256 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:02.256 Job: Nvme2n1 ended in about 0.86 seconds with error
00:24:02.256 Verification LBA range: start 0x0 length 0x400
00:24:02.256 Nvme2n1 : 0.86 157.02 9.81 74.44 0.00 266930.21 28398.93 235929.60
00:24:02.256 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:02.256 Job: Nvme3n1 ended in about 0.85 seconds with error
00:24:02.256 Verification LBA range: start 0x0 length 0x400
00:24:02.256 Nvme3n1 : 0.85 228.15 14.26 75.66 0.00 198306.38 6826.67 246415.36
00:24:02.256 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:02.256 Job: Nvme4n1 ended in about 0.85 seconds with error
00:24:02.256 Verification LBA range: start 0x0 length 0x400
00:24:02.257 Nvme4n1 : 0.85 226.66 14.17 75.55 0.00 194531.84 17803.95 248162.99
00:24:02.257 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:02.257 Job: Nvme5n1 ended in about 0.86 seconds with error
00:24:02.257 Verification LBA range: start 0x0 length 0x400
00:24:02.257 Nvme5n1 : 0.86 148.46 9.28 74.23 0.00 257969.21 15728.64 251658.24
00:24:02.257 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:02.257 Job: Nvme6n1 ended in about 0.86 seconds with error
00:24:02.257 Verification LBA range: start 0x0 length 0x400
00:24:02.257 Nvme6n1 : 0.86 148.05 9.25 74.02 0.00 252359.68 17694.72 246415.36
00:24:02.257 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:02.257 Job: Nvme7n1 ended in about 0.87 seconds with error
00:24:02.257 Verification LBA range: start 0x0 length 0x400
00:24:02.257 Nvme7n1 : 0.87 147.64 9.23 73.82 0.00 246701.80 20971.52 272629.76
00:24:02.257 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:02.257 Job: Nvme8n1 ended in about 0.87 seconds with error
00:24:02.257 Verification LBA range: start 0x0 length 0x400
00:24:02.257 Nvme8n1 : 0.87 147.23 9.20 73.62 0.00 240954.31 25995.95 223696.21
00:24:02.257 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:02.257 Job: Nvme9n1 ended in about 0.85 seconds with error
00:24:02.257 Verification LBA range: start 0x0 length 0x400
00:24:02.257 Nvme9n1 : 0.85 149.89 9.37 74.94 0.00 229321.10 24139.09 255153.49
00:24:02.257 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:02.257 Job: Nvme10n1 ended in about 0.85 seconds with error
00:24:02.257 Verification LBA range: start 0x0 length 0x400
00:24:02.257 Nvme10n1 : 0.85 150.86 9.43 75.43 0.00 221122.28 21080.75 272629.76
00:24:02.257 [2024-11-28T07:22:59.546Z] ===================================================================================================================
00:24:02.257 [2024-11-28T07:22:59.546Z] Total : 1653.24 103.33 746.35 0.00 236477.28 6826.67 272629.76
00:24:02.257 [2024-11-28 08:22:59.470295] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:02.257 [2024-11-28 08:22:59.470348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:24:02.257 [2024-11-28 08:22:59.470781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.257 [2024-11-28 08:22:59.470801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b84cc0 with addr=10.0.0.2, port=4420
00:24:02.257 [2024-11-28 08:22:59.470812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b84cc0 is same with the state(6) to be set
00:24:02.257 [2024-11-28 08:22:59.471137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.257 [2024-11-28 08:22:59.471154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b84850 with addr=10.0.0.2, port=4420
00:24:02.257 [2024-11-28 08:22:59.471168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b84850 is same with the state(6) to be set
00:24:02.257 [2024-11-28 08:22:59.471501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.257 [2024-11-28 08:22:59.471511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb0180 with addr=10.0.0.2, port=4420
00:24:02.257 [2024-11-28 08:22:59.471518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb0180 is same with the state(6) to be set
00:24:02.257 [2024-11-28 08:22:59.471804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:02.257 [2024-11-28 08:22:59.471814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fa5c90 with addr=10.0.0.2, port=4420
00:24:02.257 [2024-11-28 08:22:59.471821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa5c90 is same with the state(6) to be set
00:24:02.257 [2024-11-28 08:22:59.473444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:24:02.257 [2024-11-28 08:22:59.473461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:24:02.257 [2024-11-28 08:22:59.473471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:24:02.257 [2024-11-28 08:22:59.473480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:24:02.257 [2024-11-28 08:22:59.473743]
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.257 [2024-11-28 08:22:59.473759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9c610 with addr=10.0.0.2, port=4420 00:24:02.257 [2024-11-28 08:22:59.473766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9c610 is same with the state(6) to be set 00:24:02.257 [2024-11-28 08:22:59.473955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.257 [2024-11-28 08:22:59.473965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fa56a0 with addr=10.0.0.2, port=4420 00:24:02.257 [2024-11-28 08:22:59.473973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa56a0 is same with the state(6) to be set 00:24:02.257 [2024-11-28 08:22:59.473985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b84cc0 (9): Bad file descriptor 00:24:02.257 [2024-11-28 08:22:59.473997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b84850 (9): Bad file descriptor 00:24:02.257 [2024-11-28 08:22:59.474007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb0180 (9): Bad file descriptor 00:24:02.257 [2024-11-28 08:22:59.474016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa5c90 (9): Bad file descriptor 00:24:02.257 [2024-11-28 08:22:59.474052] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:24:02.257 [2024-11-28 08:22:59.474065] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:24:02.257 [2024-11-28 08:22:59.474075] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:24:02.257 [2024-11-28 08:22:59.474086] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
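[Note: the 'Unable to perform failover, already in progress' notices are the bdev_nvme layer refusing to start a second failover while a reset of the same controller is still running; the path simply retries once the first reset finishes. When triaging a run stuck in this state, the controller states can be dumped over the application's RPC socket. A sketch; the socket path is an assumption about how bdevperf was launched in this job, not something this log shows:]
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers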
00:24:02.257 [2024-11-28 08:22:59.474658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.257 [2024-11-28 08:22:59.474675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b82fc0 with addr=10.0.0.2, port=4420 00:24:02.257 [2024-11-28 08:22:59.474687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b82fc0 is same with the state(6) to be set 00:24:02.257 [2024-11-28 08:22:59.474873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.257 [2024-11-28 08:22:59.474883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1feed50 with addr=10.0.0.2, port=4420 00:24:02.257 [2024-11-28 08:22:59.474891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1feed50 is same with the state(6) to be set 00:24:02.257 [2024-11-28 08:22:59.475203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.257 [2024-11-28 08:22:59.475214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fb0980 with addr=10.0.0.2, port=4420 00:24:02.257 [2024-11-28 08:22:59.475221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb0980 is same with the state(6) to be set 00:24:02.257 [2024-11-28 08:22:59.475550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.257 [2024-11-28 08:22:59.475559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ffcbb0 with addr=10.0.0.2, port=4420 00:24:02.257 [2024-11-28 08:22:59.475567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffcbb0 is same with the state(6) to be set 00:24:02.257 [2024-11-28 08:22:59.475576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9c610 (9): Bad file descriptor 00:24:02.257 [2024-11-28 08:22:59.475586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa56a0 (9): Bad file descriptor 00:24:02.257 [2024-11-28 08:22:59.475594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:02.257 [2024-11-28 08:22:59.475601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:02.257 [2024-11-28 08:22:59.475609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:02.257 [2024-11-28 08:22:59.475618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:24:02.257 [2024-11-28 08:22:59.475627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:24:02.257 [2024-11-28 08:22:59.475633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:24:02.257 [2024-11-28 08:22:59.475640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:24:02.257 [2024-11-28 08:22:59.475646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:24:02.257 [2024-11-28 08:22:59.475653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:24:02.257 [2024-11-28 08:22:59.475660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:24:02.257 [2024-11-28 08:22:59.475667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:24:02.257 [2024-11-28 08:22:59.475673] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:24:02.257 [2024-11-28 08:22:59.475680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:24:02.257 [2024-11-28 08:22:59.475687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:24:02.257 [2024-11-28 08:22:59.475694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:24:02.257 [2024-11-28 08:22:59.475700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:24:02.257 [2024-11-28 08:22:59.475778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b82fc0 (9): Bad file descriptor 00:24:02.257 [2024-11-28 08:22:59.475792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1feed50 (9): Bad file descriptor 00:24:02.257 [2024-11-28 08:22:59.475802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb0980 (9): Bad file descriptor 00:24:02.257 [2024-11-28 08:22:59.475811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffcbb0 (9): Bad file descriptor 00:24:02.257 [2024-11-28 08:22:59.475819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:24:02.257 [2024-11-28 08:22:59.475825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:24:02.257 [2024-11-28 08:22:59.475832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:24:02.257 [2024-11-28 08:22:59.475839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:24:02.257 [2024-11-28 08:22:59.475846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:24:02.258 [2024-11-28 08:22:59.475853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:24:02.258 [2024-11-28 08:22:59.475860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:24:02.258 [2024-11-28 08:22:59.475866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:24:02.258 [2024-11-28 08:22:59.475894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:24:02.258 [2024-11-28 08:22:59.475902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:24:02.258 [2024-11-28 08:22:59.475909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:24:02.258 [2024-11-28 08:22:59.475915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:24:02.258 [2024-11-28 08:22:59.475922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:24:02.258 [2024-11-28 08:22:59.475929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:24:02.258 [2024-11-28 08:22:59.475936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:24:02.258 [2024-11-28 08:22:59.475942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:24:02.258 [2024-11-28 08:22:59.475949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:24:02.258 [2024-11-28 08:22:59.475955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:24:02.258 [2024-11-28 08:22:59.475963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:24:02.258 [2024-11-28 08:22:59.475969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:24:02.258 [2024-11-28 08:22:59.475976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:24:02.258 [2024-11-28 08:22:59.475982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:24:02.258 [2024-11-28 08:22:59.475989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:24:02.258 [2024-11-28 08:22:59.475996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
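[Note: every posix.c connect() failure feeding these resets is errno = 111 (ECONNREFUSED): the target application has already been stopped by this shutdown test, so nothing listens on 10.0.0.2:4420 and each reconnect attempt is refused, which is why every controller ends in 'Resetting controller failed'. A one-line check that the listener really is gone, a sketch using bash's /dev/tcp from the initiator side:]
timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null || echo 'no listener on 10.0.0.2:4420'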
00:24:02.517 08:22:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2038732 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2038732 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2038732 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:03.456 rmmod nvme_tcp 00:24:03.456 
rmmod nvme_fabrics 00:24:03.456 rmmod nvme_keyring 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2038492 ']' 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2038492 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2038492 ']' 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2038492 00:24:03.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2038492) - No such process 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2038492 is not found' 00:24:03.456 Process with pid 2038492 is not found 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.456 08:23:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:06.002 00:24:06.002 real 0m7.923s 00:24:06.002 user 0m19.887s 00:24:06.002 sys 0m1.202s 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:06.002 ************************************ 00:24:06.002 END TEST nvmf_shutdown_tc3 00:24:06.002 ************************************ 00:24:06.002 08:23:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:06.002 ************************************ 00:24:06.002 START TEST nvmf_shutdown_tc4 00:24:06.002 ************************************ 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:24:06.002 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:06.003 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:06.003 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.003 08:23:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:06.003 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:06.003 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:06.003 08:23:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:06.003 08:23:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:06.003 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:06.003 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:06.003 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:06.003 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:06.003 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:06.003 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:06.003 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:06.003 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:06.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:06.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:24:06.003 00:24:06.003 --- 10.0.0.2 ping statistics --- 00:24:06.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.003 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:24:06.003 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:06.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:06.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:24:06.003 00:24:06.003 --- 10.0.0.1 ping statistics --- 00:24:06.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.003 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:24:06.003 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:06.003 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:24:06.003 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:06.003 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.003 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:06.004 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:06.004 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.004 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:06.004 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:06.004 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:06.004 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:06.004 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:06.004 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:06.004 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2040174 00:24:06.004 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2040174 00:24:06.004 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:06.004 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2040174 ']' 00:24:06.004 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.004 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:06.004 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
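[Note: the nvmftestinit sequence above is what lets a single host act as both ends of the fabric: physical port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the two pings verify reachability in both directions. Boiled down to the commands that matter, as traced above:]
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port enters the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
[nvmf_tgt is then launched inside that namespace with -m 0x1E; bit i of the mask selects core i, so 0x1E (binary 11110) pins reactors to cores 1-4, which matches the 'Reactor started on core' notices that follow. A throwaway expansion of any such mask:]
mask=0x1E; printf 'cores:'; for i in {0..31}; do (( (mask >> i) & 1 )) && printf ' %d' "$i"; done; echo   # -> cores: 1 2 3 4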
00:24:06.004 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.004 08:23:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:06.264 [2024-11-28 08:23:03.335769] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:24:06.264 [2024-11-28 08:23:03.335840] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.264 [2024-11-28 08:23:03.430786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:06.264 [2024-11-28 08:23:03.464687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.264 [2024-11-28 08:23:03.464718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.264 [2024-11-28 08:23:03.464724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.264 [2024-11-28 08:23:03.464729] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.264 [2024-11-28 08:23:03.464733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:06.264 [2024-11-28 08:23:03.466056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.264 [2024-11-28 08:23:03.466209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:06.264 [2024-11-28 08:23:03.466535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.264 [2024-11-28 08:23:03.466535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:07.204 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:07.204 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:24:07.204 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:07.204 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:07.204 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:07.204 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.204 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:07.204 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.204 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:07.204 [2024-11-28 08:23:04.185999] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.204 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.204 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:07.204 08:23:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:07.204 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:07.204 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:07.204 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:07.204 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:07.204 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:07.204 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:07.204 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:07.204 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:07.204 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:07.204 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:07.204 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:07.205 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:07.205 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:07.205 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:07.205 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:07.205 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:07.205 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:07.205 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:07.205 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:07.205 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:07.205 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:07.205 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:07.205 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:07.205 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:07.205 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.205 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:07.205 Malloc1 
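[Note: the for-loop over num_subsystems above cats one block of RPC commands per subsystem into rpcs.txt, which rpc_cmd then plays against the target as a single batch; the Malloc1 through Malloc10 lines around here are the malloc bdevs those batches create. One iteration plausibly looks like the following sketch of the line-per-command format rpc.py accepts on stdin; the exact sizes and flags live in shutdown.sh, which this log does not show:]
bdev_malloc_create 64 512 -b Malloc1
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420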
00:24:07.205 [2024-11-28 08:23:04.299987] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:07.205 Malloc2 00:24:07.205 Malloc3 00:24:07.205 Malloc4 00:24:07.205 Malloc5 00:24:07.205 Malloc6 00:24:07.464 Malloc7 00:24:07.464 Malloc8 00:24:07.464 Malloc9 00:24:07.464 Malloc10 00:24:07.464 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.464 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:07.465 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:07.465 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:07.465 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2040549 00:24:07.465 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:24:07.465 08:23:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:24:07.723 [2024-11-28 08:23:04.783909] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:13.047 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:13.047 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2040174 00:24:13.047 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2040174 ']' 00:24:13.047 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2040174 00:24:13.047 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:24:13.047 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:13.047 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2040174 00:24:13.047 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:13.047 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:13.047 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2040174' 00:24:13.047 killing process with pid 2040174 00:24:13.047 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2040174 00:24:13.047 08:23:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2040174 00:24:13.047 [2024-11-28 08:23:09.779087] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01330 is same with the state(6) to be set 00:24:13.047 [2024-11-28 08:23:09.779132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01330 is same with the state(6) to be set 00:24:13.047 [2024-11-28 08:23:09.779138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01330 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.779144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01330 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.779148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01330 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.779153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01330 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.779162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01330 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.779168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01330 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.779172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01330 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.779293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01800 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.779325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc01800 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.779744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc00e60 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.779766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc00e60 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.779773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc00e60 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.779778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc00e60 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.779782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc00e60 is same with the state(6) to be set 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 starting I/O failed: -6 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 starting I/O failed: -6 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 starting I/O failed: -6 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 starting I/O failed: -6 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed 
with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 starting I/O failed: -6 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 starting I/O failed: -6 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 starting I/O failed: -6 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 [2024-11-28 08:23:09.780439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:13.048 NVMe io qpair process completion error 00:24:13.048 [2024-11-28 08:23:09.780640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff650 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.780663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff650 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.780669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff650 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.780674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff650 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.780679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff650 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.780684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff650 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.783056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc00990 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.783074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc00990 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.783079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc00990 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.783083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc00990 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.783348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffb20 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.783369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffb20 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.783374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffb20 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.783379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffb20 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.783384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffb20 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.783389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xbffb20 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.783394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffb20 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.783398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbffb20 is same with the state(6) to be set 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 starting I/O failed: -6 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 starting I/O failed: -6 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 starting I/O failed: -6 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 [2024-11-28 08:23:09.783882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02670 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.783901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02670 is same with the state(6) to be set 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 [2024-11-28 08:23:09.783906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02670 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.783911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02670 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.783916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02670 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.783921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02670 is same with the state(6) to be set 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 starting I/O failed: -6 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 starting I/O failed: -6 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 starting I/O failed: -6 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 starting I/O failed: -6 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 starting I/O failed: -6 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 [2024-11-28 08:23:09.784301] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.048 [2024-11-28 08:23:09.784307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b40 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.784323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b40 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.784329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b40 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.784334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b40 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.784339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b40 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.784344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b40 is same with the state(6) to be set 00:24:13.048 [2024-11-28 08:23:09.784349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc02b40 is same with the state(6) to be set 00:24:13.048 starting I/O failed: -6 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 starting I/O failed: -6 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 starting I/O failed: -6 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 Write completed with error (sct=0, sc=8) 00:24:13.048 [2024-11-28 08:23:09.784629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03010 is same with the state(6) to be set 00:24:13.049 [2024-11-28 08:23:09.784644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03010 is same with the state(6) to be set 00:24:13.049 [2024-11-28 08:23:09.784649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03010 is same with the state(6) to be set 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 [2024-11-28 08:23:09.784666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03010 is same with the state(6) to be set 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 [2024-11-28 08:23:09.784704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc021a0 is same with the state(6) to be set 00:24:13.049 [2024-11-28 08:23:09.784718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc021a0 is same with the state(6) to be set 00:24:13.049 [2024-11-28 08:23:09.784723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc021a0 is same with the state(6) to be set 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 [2024-11-28 08:23:09.784730]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc021a0 is same with the state(6) to be set 00:24:13.049 starting I/O failed: -6 00:24:13.049 [2024-11-28 08:23:09.784735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc021a0 is same with the state(6) to be set 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 [2024-11-28 08:23:09.784740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc021a0 is same with the state(6) to be set 00:24:13.049 [2024-11-28 08:23:09.784745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc021a0 is same with the state(6) to be set 00:24:13.049 starting I/O failed: -6 00:24:13.049 [2024-11-28 08:23:09.784750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc021a0 is same with the state(6) to be set 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 [2024-11-28 08:23:09.785240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.049 starting I/O failed: -6 00:24:13.049 starting I/O failed: -6 00:24:13.049 starting I/O failed: -6 00:24:13.049 starting I/O failed: -6 00:24:13.049 starting I/O failed: -6 00:24:13.049 starting I/O failed: -6 00:24:13.049 starting I/O failed: -6 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 
starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 [2024-11-28 08:23:09.786435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write 
completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.049 Write completed with error (sct=0, sc=8) 00:24:13.049 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write 
completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 [2024-11-28 08:23:09.787850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:13.050 NVMe io qpair process completion error 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write 
completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 [2024-11-28 08:23:09.789184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error 
(sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 [2024-11-28 08:23:09.789983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.050 starting I/O failed: -6 00:24:13.050 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 
Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 [2024-11-28 08:23:09.790896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 
00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 [2024-11-28 08:23:09.792360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:13.051 NVMe io qpair process 
completion error 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 starting I/O failed: -6 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.051 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 [2024-11-28 08:23:09.793477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write 
completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 [2024-11-28 08:23:09.794295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 
Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 [2024-11-28 08:23:09.795236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.052 starting I/O failed: -6 00:24:13.052 starting I/O failed: -6 00:24:13.052 starting I/O failed: -6 00:24:13.052 starting I/O failed: -6 00:24:13.052 starting I/O failed: -6 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 
starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.052 Write completed with error (sct=0, sc=8) 00:24:13.052 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 
starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 [2024-11-28 08:23:09.798834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:13.053 NVMe io qpair process completion error 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 starting I/O failed: -6 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 Write completed with error (sct=0, sc=8) 00:24:13.053 
00:24:13.053 [2024-11-28 08:23:09.800113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:13.053 [2024-11-28 08:23:09.800929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:13.054 [2024-11-28 08:23:09.801850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.054 [2024-11-28 08:23:09.803472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:13.054 NVMe io qpair process completion error
00:24:13.055 [2024-11-28 08:23:09.804549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:13.055 [2024-11-28 08:23:09.805475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.055 [2024-11-28 08:23:09.806392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:13.056 [2024-11-28 08:23:09.808573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:13.056 NVMe io qpair process completion error
00:24:13.056 [2024-11-28 08:23:09.809757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:13.056 [2024-11-28 08:23:09.810661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:13.057 [2024-11-28 08:23:09.811584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:13.058 [2024-11-28 08:23:09.814339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.058 NVMe io qpair process completion error
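For reference, the status in the per-I/O lines decodes as sct=0 (generic command status) and sc=8, which in the NVMe generic status set is "Command Aborted due to SQ Deletion", consistent with each target deleting its queues while writes were in flight. A hedged sketch of the kind of completion callback that emits such lines; struct spdk_nvme_cpl and spdk_nvme_cpl_is_error() are real SPDK API, while write_complete_cb() and the message text are illustrative:

    /* Sketch only: a completion callback in the style that prints
     * "Write completed with error (sct=0, sc=8)". */
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    write_complete_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            (void)arg;
            if (spdk_nvme_cpl_is_error(cpl)) {
                    /* sct=0, sc=8: generic status type, "Command Aborted due
                     * to SQ Deletion" per the NVMe specification numbering. */
                    printf("Write completed with error (sct=%d, sc=%d)\n",
                           cpl->status.sct, cpl->status.sc);
            }
    }

A callback of this shape would be passed per write to spdk_nvme_ns_cmd_write(), presumably how this test keeps I/O in flight while the subsystems shut down.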
00:24:13.058 [2024-11-28 08:23:09.815432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:13.058 [2024-11-28 08:23:09.816329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:13.058 [2024-11-28 08:23:09.817243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:13.059 [2024-11-28 08:23:09.818671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:13.059 NVMe io qpair process completion error
00:24:13.059 [2024-11-28 08:23:09.819789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:13.059 starting I/O failed: -6 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 Write completed
with error (sct=0, sc=8) 00:24:13.059 starting I/O failed: -6 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 starting I/O failed: -6 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 starting I/O failed: -6 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 starting I/O failed: -6 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 starting I/O failed: -6 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 starting I/O failed: -6 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 starting I/O failed: -6 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 starting I/O failed: -6 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 starting I/O failed: -6 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 starting I/O failed: -6 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 starting I/O failed: -6 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 starting I/O failed: -6 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 starting I/O failed: -6 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 starting I/O failed: -6 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 starting I/O failed: -6 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 starting I/O failed: -6 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 starting I/O failed: -6 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 starting I/O failed: -6 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 starting I/O failed: -6 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 starting I/O failed: -6 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 starting I/O failed: -6 00:24:13.059 Write completed with error (sct=0, sc=8) 00:24:13.059 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 [2024-11-28 08:23:09.820798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 
00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, 
sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 [2024-11-28 08:23:09.821748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:13.060 starting I/O failed: -6 00:24:13.060 starting I/O failed: -6 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, 
sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 starting I/O failed: -6 00:24:13.060 [2024-11-28 08:23:09.825236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.060 NVMe io qpair process completion error 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.060 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 
00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error 
(sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 [2024-11-28 08:23:09.827127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.061 NVMe io qpair process completion error 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write 
completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 starting I/O failed: -6 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.061 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting 
I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 [2024-11-28 08:23:09.828839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 
Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 [2024-11-28 08:23:09.829789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write 
completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.062 Write completed with error (sct=0, sc=8) 00:24:13.062 starting I/O failed: -6 00:24:13.063 Write completed with error (sct=0, sc=8) 00:24:13.063 starting I/O failed: -6 00:24:13.063 Write completed with error (sct=0, sc=8) 00:24:13.063 starting I/O failed: -6 00:24:13.063 Write completed with error (sct=0, sc=8) 00:24:13.063 starting I/O failed: -6 00:24:13.063 Write completed with error (sct=0, sc=8) 00:24:13.063 starting I/O failed: -6 00:24:13.063 Write completed with error (sct=0, sc=8) 00:24:13.063 starting I/O failed: -6 00:24:13.063 Write completed with error (sct=0, sc=8) 00:24:13.063 starting I/O failed: -6 00:24:13.063 Write completed with error (sct=0, sc=8) 00:24:13.063 starting I/O failed: -6 00:24:13.063 Write completed with error (sct=0, sc=8) 00:24:13.063 starting I/O failed: -6 00:24:13.063 Write completed with error (sct=0, sc=8) 00:24:13.063 starting I/O failed: -6 00:24:13.063 Write completed with error (sct=0, sc=8) 00:24:13.063 starting I/O failed: -6 00:24:13.063 Write completed with error (sct=0, sc=8) 00:24:13.063 starting I/O failed: -6 00:24:13.063 Write completed with error (sct=0, sc=8) 00:24:13.063 starting I/O failed: -6 00:24:13.063 [2024-11-28 08:23:09.831666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:13.063 NVMe io qpair process completion error 00:24:13.063 Initializing NVMe Controllers 00:24:13.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:24:13.063 Controller IO queue size 128, less than required. 
00:24:13.063 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:13.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:24:13.063 Controller IO queue size 128, less than required. 00:24:13.063 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:13.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:24:13.063 Controller IO queue size 128, less than required. 00:24:13.063 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:13.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:24:13.063 Controller IO queue size 128, less than required. 00:24:13.063 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:13.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:24:13.063 Controller IO queue size 128, less than required. 00:24:13.063 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:13.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:24:13.063 Controller IO queue size 128, less than required. 00:24:13.063 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:13.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:24:13.063 Controller IO queue size 128, less than required. 00:24:13.063 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:13.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:24:13.063 Controller IO queue size 128, less than required. 00:24:13.063 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:13.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:13.063 Controller IO queue size 128, less than required. 00:24:13.063 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:13.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:24:13.063 Controller IO queue size 128, less than required. 00:24:13.063 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
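For triage, the storm above reduces to three numbers: how many writes came back aborted (sct=0, sc=8, which matches the NVMe generic status 0x08, Command Aborted due to SQ Deletion — the expected status when subsystems are deleted mid-I/O), how many queued I/Os failed with -6 (ENXIO), and which subsystem/qpair pairs hit the CQ transport error. A minimal sketch, assuming the console output was saved to a file (the name shutdown_tc4.log is hypothetical):

#!/usr/bin/env bash
# Hypothetical triage helper for a captured shutdown_tc4 console log.
# The patterns below match the log lines shown above verbatim.
log=${1:-shutdown_tc4.log}

echo "aborted writes (sct=0, sc=8): $(grep -c 'Write completed with error (sct=0, sc=8)' "$log")"
echo "queued I/O failures (-6):     $(grep -c 'starting I/O failed: -6' "$log")"

# Group the CQ transport errors by subsystem NQN and qpair id.
grep -o '\[nqn[^]]*\] CQ transport error -6 ([^)]*) on qpair id [0-9]*' "$log" |
    sort | uniq -c | sort -rn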
00:24:13.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:24:13.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:24:13.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:24:13.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:24:13.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:24:13.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:24:13.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:24:13.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:24:13.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:13.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:24:13.063 Initialization complete. Launching workers.
00:24:13.063 ========================================================
00:24:13.063                                                                          Latency(us)
00:24:13.063 Device Information                                                        :     IOPS    MiB/s    Average        min        max
00:24:13.063 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2)  NSID 1 from core 0:  1900.78    81.67   67359.79     653.41  126421.24
00:24:13.063 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6)  NSID 1 from core 0:  1891.52    81.28   67711.06     838.93  127556.12
00:24:13.063 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9)  NSID 1 from core 0:  1877.85    80.69   68232.93     818.58  126038.44
00:24:13.063 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7)  NSID 1 from core 0:  1897.69    81.54   67562.12     830.68  125065.36
00:24:13.063 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8)  NSID 1 from core 0:  1886.01    81.04   68001.93     837.56  123662.83
00:24:13.063 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5)  NSID 1 from core 0:  1895.49    81.45   67694.07     862.09  125430.12
00:24:13.063 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  1882.48    80.89   68199.27     897.23  124888.24
00:24:13.063 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3)  NSID 1 from core 0:  1858.68    79.87   69099.09     548.82  119107.03
00:24:13.063 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1)  NSID 1 from core 0:  1877.85    80.69   68371.52     634.62  133365.65
00:24:13.063 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4)  NSID 1 from core 0:  1927.01    82.80   66672.23     803.84  126374.31
00:24:13.063 ========================================================
00:24:13.063 Total                                                                     : 18895.37   811.91   67884.83     548.82  133365.65
00:24:13.063
00:24:13.063 [2024-11-28 08:23:09.839064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10a4ae0 is same with the state(6) to be set
00:24:13.063 [the same recv-state error repeats at 08:23:09.839109 through .839353 for tqpair=0x10a2ef0, 0x10a3a70, 0x10a3410, 0x10a3740, 0x10a2bc0, 0x10a4900, 0x10a2560, 0x10a4720 and 0x10a2890]
00:24:13.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:24:13.063 08:23:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2040549
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2040549
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2040549
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
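The NOT wait 2040549 sequence above is the harness asserting that waiting on the already-killed perf process fails: valid_exec_arg confirms wait is callable, the exit status is captured into es, anything above 128 (a signal death) is treated as a real failure, and the step passes only because es ended up non-zero. A minimal sketch of that negation-wrapper idea, not autotest_common.sh's exact code:

# Sketch of the NOT helper traced above: run a command that is expected
# to fail and succeed only when it actually does.
NOT() {
    local es=0
    "$@" || es=$?
    # Exit codes above 128 mean the command died on a signal; pass that
    # through as a genuine failure rather than an "expected" error.
    (( es > 128 )) && return "$es"
    (( es != 0 ))    # status 0 (success) only if the command failed
}

# Usage, mirroring the log: NOT wait "$perf_pid"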
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:14.007 rmmod nvme_tcp
00:24:14.007 rmmod nvme_fabrics
00:24:14.007 rmmod nvme_keyring
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2040174 ']'
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2040174
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2040174 ']'
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2040174
00:24:14.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2040174) - No such process
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2040174 is not found'
00:24:14.007 Process with pid 2040174 is not found
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:14.007 08:23:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
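The nvmftestfini path traced above (and finished by the address flush just below) comes down to: unload the NVMe transport modules, drop the SPDK_NVMF iptables rules while restoring everything else, and remove the test network namespace. A condensed sketch of that sequence, assuming this run's interface and namespace names; the real helper in test/nvmf/common.sh wraps the module removal in a retry loop and extra guards:

nvmf_teardown_sketch() {
    sync
    # Modules may already be gone, hence the || true guards.
    modprobe -v -r nvme-tcp || true
    modprobe -v -r nvme-fabrics || true
    # Keep every iptables rule except the SPDK_NVMF test rules.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Namespace/interface names taken from this run; adjust per setup.
    ip netns del cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1 || true
}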
00:24:15.921 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:15.921
00:24:15.921 real 0m10.288s
00:24:15.921 user 0m28.141s
00:24:15.921 sys 0m3.939s
00:24:15.921 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:15.921 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:24:15.921 ************************************
00:24:15.921 END TEST nvmf_shutdown_tc4
00:24:15.921 ************************************
00:24:16.181 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:24:16.181
00:24:16.181 real 0m43.613s
00:24:16.181 user 1m46.472s
00:24:16.181 sys 0m13.778s
00:24:16.182 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:16.182 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:24:16.182 ************************************
00:24:16.182 END TEST nvmf_shutdown
00:24:16.182 ************************************
00:24:16.182 08:23:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:24:16.182 08:23:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:16.182 08:23:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:16.182 08:23:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:24:16.182 ************************************
00:24:16.182 START TEST nvmf_nsid
00:24:16.182 ************************************
00:24:16.182 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
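The START/END banners and the real/user/sys triples bracketing each test come from the run_test wrapper invoked above. A hedged sketch of that banner-and-timing pattern (the real helper in autotest_common.sh also validates its arguments, as the '[' 3 -le 1 ']' check shows, and keeps a stack of test names):

run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                 # prints the real/user/sys triple seen above
    local rc=$?               # exit status of the test itself, not of time
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return "$rc"
}

# Usage: run_test_sketch nvmf_nsid ./test/nvmf/target/nsid.sh --transport=tcp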
00:24:16.182 * Looking for test storage...
00:24:16.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:24:16.182 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:24:16.182 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version
00:24:16.182 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:16.443 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:24:16.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:16.443 --rc genhtml_branch_coverage=1
00:24:16.443 --rc genhtml_function_coverage=1
00:24:16.443 --rc genhtml_legend=1
00:24:16.443 --rc geninfo_all_blocks=1
00:24:16.443 --rc geninfo_unexecuted_blocks=1
00:24:16.443
00:24:16.443 '
00:24:16.443 [the same option block is logged three more times: for the LCOV_OPTS assignment (@1706) and for the export and assignment of LCOV='lcov ...' (@1707); the repeats are elided]
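The lt 1.15 2 trace above shows how the harness compares tool versions: cmp_versions splits both strings on '.', '-' and ':' into arrays, normalizes each field with decimal, and compares field by field, which is why lcov 1.15 sorts below 2. A minimal re-implementation of that idea (mirroring, not reproducing, scripts/common.sh):

# Returns 0 (true) when dotted version $1 is strictly less than $2.
version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}   # missing fields count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1    # equal is not less-than
}

version_lt 1.15 2 && echo "lcov 1.15 is older than 2"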
== FreeBSD ]] 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:16.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:24:16.444 08:23:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:24.587 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:24.587 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
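For reference, the device-classification pass traced above reduces to a short sysfs walk: match each PCI function's vendor/device pair against the known E810 IDs (0x8086/0x159b, the pair behind the "Found 0000:4b:00.0" lines), then resolve the function to its kernel net device under /sys/bus/pci/devices/$pci/net/. A minimal sketch of that pattern follows — the pci_bus_cache bookkeeping in nvmf/common.sh is not reproduced here, and real runs also classify x722 and Mellanox IDs:

intel=0x8086 e810=0x159b
net_devs=()
for pci in /sys/bus/pci/devices/*; do
  # keep only Intel E810 functions
  [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == "$e810" ]] || continue
  for dev in "$pci"/net/*; do            # each bound function exposes its interface here
    [[ -e $dev ]] && net_devs+=("${dev##*/}")
  done
done
printf 'Found net device: %s\n' "${net_devs[@]}"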
00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:24.587 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:24.587 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:24.587 08:23:20 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:24.587 08:23:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:24.587 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:24.587 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:24.587 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:24.587 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:24.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:24.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.689 ms 00:24:24.588 00:24:24.588 --- 10.0.0.2 ping statistics --- 00:24:24.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.588 rtt min/avg/max/mdev = 0.689/0.689/0.689/0.000 ms 00:24:24.588 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:24.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:24.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:24:24.588 00:24:24.588 --- 10.0.0.1 ping statistics --- 00:24:24.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.588 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:24:24.588 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:24.588 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:24:24.588 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:24.588 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:24.588 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:24.588 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:24.588 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:24.588 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:24.588 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:24.588 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:24:24.588 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:24.588 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:24.588 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:24.588 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2045903 00:24:24.588 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2045903 00:24:24.588 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:24:24.588 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2045903 ']' 00:24:24.588 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.588 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:24.588 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.588 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:24.588 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:24.588 [2024-11-28 08:23:21.190087] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:24:24.588 [2024-11-28 08:23:21.190165] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.588 [2024-11-28 08:23:21.287900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.588 [2024-11-28 08:23:21.339430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.588 [2024-11-28 08:23:21.339480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.588 [2024-11-28 08:23:21.339489] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.588 [2024-11-28 08:23:21.339497] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.588 [2024-11-28 08:23:21.339503] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:24.588 [2024-11-28 08:23:21.340283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.849 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:24.849 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:24.849 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:24.849 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:24.849 08:23:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2046094 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=7b5c7c87-acab-4e03-a686-e655318fe2bc 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=8e5ad46e-5aaf-45b8-ac02-c0fb71d73cdc 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=a88e4653-b61e-4097-9c51-418dc87b9827 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.849 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:24.849 null0 00:24:24.849 null1 00:24:24.849 [2024-11-28 08:23:22.105252] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:24:24.849 [2024-11-28 08:23:22.105322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2046094 ] 00:24:24.849 null2 00:24:24.849 [2024-11-28 08:23:22.109216] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.849 [2024-11-28 08:23:22.133525] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:25.110 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.110 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2046094 /var/tmp/tgt2.sock 00:24:25.110 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2046094 ']' 00:24:25.110 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:24:25.110 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:25.110 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:24:25.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
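The body of waitforlisten is not shown in this trace, only its banner and its locals (rpc_addr, max_retries=100); what those imply is a bounded poll against the RPC socket until the freshly launched spdk_tgt answers. A sketch under those assumptions — rpc_get_methods is a standard SPDK RPC, but the real helper's probe may differ:

wait_for_rpc_sock() {
  local pid=$1 rpc_addr=$2 max_retries=${3:-100} i=0
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  while (( i++ < max_retries )); do
    kill -0 "$pid" 2>/dev/null || return 1   # target died before it started listening
    # the socket existing is not enough; require an answered RPC
    if [[ -S $rpc_addr ]] && scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
      return 0
    fi
    sleep 0.5
  done
  return 1
}
# e.g. wait_for_rpc_sock "$tgt2pid" /var/tmp/tgt2.sock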
00:24:25.110 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:25.110 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:25.110 [2024-11-28 08:23:22.198037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.110 [2024-11-28 08:23:22.251147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:25.370 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:25.370 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:25.370 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:24:25.631 [2024-11-28 08:23:22.818496] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.631 [2024-11-28 08:23:22.834684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:24:25.631 nvme0n1 nvme0n2 00:24:25.631 nvme1n1 00:24:25.631 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:24:25.631 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:24:25.631 08:23:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:27.169 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:24:27.169 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:24:27.169 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:24:27.169 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:24:27.169 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:24:27.169 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:24:27.169 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:24:27.169 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:27.169 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:27.169 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:27.169 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:24:27.169 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:24:27.169 08:23:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:24:28.158 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:28.158 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:28.158 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:28.158 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:28.158 08:23:25 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:28.158 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 7b5c7c87-acab-4e03-a686-e655318fe2bc 00:24:28.158 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:28.158 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:24:28.158 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:24:28.158 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:24:28.158 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:28.158 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7b5c7c87acab4e03a686e655318fe2bc 00:24:28.158 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7B5C7C87ACAB4E03A686E655318FE2BC 00:24:28.158 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 7B5C7C87ACAB4E03A686E655318FE2BC == \7\B\5\C\7\C\8\7\A\C\A\B\4\E\0\3\A\6\8\6\E\6\5\5\3\1\8\F\E\2\B\C ]] 00:24:28.158 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:24:28.158 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:28.158 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:24:28.158 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:28.158 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:28.158 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:24:28.158 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:28.158 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 8e5ad46e-5aaf-45b8-ac02-c0fb71d73cdc 00:24:28.159 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:28.420 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:24:28.420 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:24:28.420 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:24:28.420 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:28.420 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8e5ad46e5aaf45b8ac02c0fb71d73cdc 00:24:28.420 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8E5AD46E5AAF45B8AC02C0FB71D73CDC 00:24:28.420 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 8E5AD46E5AAF45B8AC02C0FB71D73CDC == \8\E\5\A\D\4\6\E\5\A\A\F\4\5\B\8\A\C\0\2\C\0\F\B\7\1\D\7\3\C\D\C ]] 00:24:28.420 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:24:28.420 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:28.420 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:28.420 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:24:28.420 08:23:25 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:28.420 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:24:28.420 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:28.420 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid a88e4653-b61e-4097-9c51-418dc87b9827 00:24:28.420 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:28.420 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:24:28.420 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:24:28.420 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:24:28.420 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:28.420 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a88e4653b61e40979c51418dc87b9827 00:24:28.420 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A88E4653B61E40979C51418DC87B9827 00:24:28.420 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ A88E4653B61E40979C51418DC87B9827 == \A\8\8\E\4\6\5\3\B\6\1\E\4\0\9\7\9\C\5\1\4\1\8\D\C\8\7\B\9\8\2\7 ]] 00:24:28.420 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:24:28.681 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:28.681 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:24:28.681 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2046094 00:24:28.681 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2046094 ']' 00:24:28.681 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2046094 00:24:28.681 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:28.681 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:28.681 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2046094 00:24:28.681 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:28.681 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:28.681 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2046094' 00:24:28.681 killing process with pid 2046094 00:24:28.681 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2046094 00:24:28.681 08:23:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2046094 00:24:28.941 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:24:28.941 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:28.941 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:24:28.941 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:28.941 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:24:28.941 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:28.941 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:28.941 rmmod nvme_tcp 00:24:28.941 rmmod nvme_fabrics 00:24:28.941 rmmod nvme_keyring 00:24:28.941 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:28.941 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:24:28.941 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:24:28.941 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2045903 ']' 00:24:28.941 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2045903 00:24:28.941 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2045903 ']' 00:24:28.941 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2045903 00:24:28.941 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:28.941 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:28.941 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2045903 00:24:28.941 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:28.941 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:28.941 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2045903' 00:24:28.941 killing process with pid 2045903 00:24:28.941 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2045903 00:24:28.941 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2045903 00:24:29.202 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:29.202 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:29.202 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:29.202 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:24:29.202 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:24:29.202 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:29.202 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:24:29.202 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:29.202 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:29.202 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.202 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.202 08:23:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.118 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:31.118 00:24:31.118 real 0m15.034s 00:24:31.118 user 
0m11.440s 00:24:31.118 sys 0m6.968s 00:24:31.118 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:31.118 08:23:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:31.118 ************************************ 00:24:31.118 END TEST nvmf_nsid 00:24:31.118 ************************************ 00:24:31.118 08:23:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:31.118 00:24:31.118 real 13m1.094s 00:24:31.118 user 27m9.498s 00:24:31.118 sys 3m57.067s 00:24:31.118 08:23:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:31.379 08:23:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:31.379 ************************************ 00:24:31.379 END TEST nvmf_target_extra 00:24:31.379 ************************************ 00:24:31.379 08:23:28 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:31.379 08:23:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:31.379 08:23:28 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:31.379 08:23:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:31.379 ************************************ 00:24:31.379 START TEST nvmf_host 00:24:31.379 ************************************ 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:31.379 * Looking for test storage... 00:24:31.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:31.379 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:31.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.641 --rc genhtml_branch_coverage=1 00:24:31.641 --rc genhtml_function_coverage=1 00:24:31.641 --rc genhtml_legend=1 00:24:31.641 --rc geninfo_all_blocks=1 00:24:31.641 --rc geninfo_unexecuted_blocks=1 00:24:31.641 00:24:31.641 ' 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:31.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.641 --rc genhtml_branch_coverage=1 00:24:31.641 --rc genhtml_function_coverage=1 00:24:31.641 --rc genhtml_legend=1 00:24:31.641 --rc geninfo_all_blocks=1 00:24:31.641 --rc geninfo_unexecuted_blocks=1 00:24:31.641 00:24:31.641 ' 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:31.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.641 --rc genhtml_branch_coverage=1 00:24:31.641 --rc genhtml_function_coverage=1 00:24:31.641 --rc genhtml_legend=1 00:24:31.641 --rc geninfo_all_blocks=1 00:24:31.641 --rc geninfo_unexecuted_blocks=1 00:24:31.641 00:24:31.641 ' 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:31.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.641 --rc genhtml_branch_coverage=1 00:24:31.641 --rc genhtml_function_coverage=1 00:24:31.641 --rc genhtml_legend=1 00:24:31.641 --rc geninfo_all_blocks=1 00:24:31.641 --rc geninfo_unexecuted_blocks=1 00:24:31.641 00:24:31.641 ' 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
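Reading back over the nsid test that just finished ("END TEST nvmf_nsid"): its core assertion is that each attached namespace reports an NGUID equal to the UUID it was created with, dashes stripped. Condensing the uuid2nguid (tr -d -) and nvme_get_nguid (nvme id-ns ... -o json | jq -r .nguid) calls visible in the trace into one helper — the waitforblk retry loop that surrounds each check is omitted:

check_nguid() {
  local ctrlr=$1 nsid=$2 uuid=$3 want got
  want=$(tr -d - <<< "$uuid")                                # uuid2nguid
  got=$(nvme id-ns "/dev/${ctrlr}n${nsid}" -o json | jq -r .nguid)
  [[ ${got^^} == "${want^^}" ]]                              # compare case-insensitively
}
# e.g. check_nguid nvme0 1 7b5c7c87-acab-4e03-a686-e655318fe2bc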
00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:31.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:31.641 08:23:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:31.642 08:23:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:31.642 08:23:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:31.642 08:23:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:31.642 08:23:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.642 ************************************ 00:24:31.642 START TEST nvmf_multicontroller 00:24:31.642 ************************************ 00:24:31.642 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:31.642 * Looking for test storage... 
00:24:31.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:31.642 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:31.642 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:24:31.642 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:31.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.904 --rc genhtml_branch_coverage=1 00:24:31.904 --rc genhtml_function_coverage=1 00:24:31.904 --rc genhtml_legend=1 00:24:31.904 --rc geninfo_all_blocks=1 00:24:31.904 --rc geninfo_unexecuted_blocks=1 00:24:31.904 00:24:31.904 ' 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:31.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.904 --rc genhtml_branch_coverage=1 00:24:31.904 --rc genhtml_function_coverage=1 00:24:31.904 --rc genhtml_legend=1 00:24:31.904 --rc geninfo_all_blocks=1 00:24:31.904 --rc geninfo_unexecuted_blocks=1 00:24:31.904 00:24:31.904 ' 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:31.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.904 --rc genhtml_branch_coverage=1 00:24:31.904 --rc genhtml_function_coverage=1 00:24:31.904 --rc genhtml_legend=1 00:24:31.904 --rc geninfo_all_blocks=1 00:24:31.904 --rc geninfo_unexecuted_blocks=1 00:24:31.904 00:24:31.904 ' 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:31.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:31.904 --rc genhtml_branch_coverage=1 00:24:31.904 --rc genhtml_function_coverage=1 00:24:31.904 --rc genhtml_legend=1 00:24:31.904 --rc geninfo_all_blocks=1 00:24:31.904 --rc geninfo_unexecuted_blocks=1 00:24:31.904 00:24:31.904 ' 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:31.904 08:23:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.904 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:31.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:31.905 08:23:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:24:31.905 08:23:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:24:40.046 
08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:40.046 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:40.046 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:40.046 08:23:36 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:40.046 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:40.046 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
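Both e810 ports were detected (cvl_0_0 and cvl_0_1), so the nvmf_tcp_init sequence that follows wires target and initiator together on a single host by splitting the two ports across network namespaces. Condensed from the trace below, using the interface names and addresses it actually assigns:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The SPDK_NVMF comment tag matters for cleanup: the teardown at the end of this test removes only these rules by round-tripping iptables-save | grep -v SPDK_NVMF | iptables-restore, leaving pre-existing firewall rules alone. The two pings that close the block verify each direction before the target is started.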
00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:40.046 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:40.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:40.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.584 ms 00:24:40.047 00:24:40.047 --- 10.0.0.2 ping statistics --- 00:24:40.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.047 rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:40.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:40.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:24:40.047 00:24:40.047 --- 10.0.0.1 ping statistics --- 00:24:40.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.047 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2051175 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2051175 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2051175 ']' 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:40.047 08:23:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.047 [2024-11-28 08:23:36.442995] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:24:40.047 [2024-11-28 08:23:36.443059] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.047 [2024-11-28 08:23:36.543615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:40.047 [2024-11-28 08:23:36.598555] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.047 [2024-11-28 08:23:36.598609] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:40.047 [2024-11-28 08:23:36.598617] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:40.047 [2024-11-28 08:23:36.598624] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:40.047 [2024-11-28 08:23:36.598631] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:40.047 [2024-11-28 08:23:36.600643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:40.047 [2024-11-28 08:23:36.600805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.047 [2024-11-28 08:23:36.600806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:40.047 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:40.047 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:40.047 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:40.047 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:40.047 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.047 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.047 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:40.047 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.047 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.047 [2024-11-28 08:23:37.322819] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.047 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.047 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:40.047 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.047 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.308 Malloc0 00:24:40.308 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.308 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:40.308 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.308 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:24:40.308 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.308 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:40.308 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.308 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.308 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.308 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.308 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.308 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.308 [2024-11-28 08:23:37.396396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.308 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.308 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:40.308 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.308 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.308 [2024-11-28 08:23:37.408280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:40.308 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.308 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:40.308 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.308 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.308 Malloc1 00:24:40.308 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2051399 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2051399 /var/tmp/bdevperf.sock 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2051399 ']' 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:40.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
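bdevperf was launched at multicontroller.sh@43 with -z, so it idles until RPCs arrive on /var/tmp/bdevperf.sock; every controller in this test is attached through that socket rather than from a config file. rpc_cmd is the suite's JSON-RPC helper; assuming it forwards its arguments to scripts/rpc.py unchanged (the wrapper itself is not shown in this trace), the first attach at multicontroller.sh@50 just below is equivalent to:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 \
      -i 10.0.0.1            # connect from the initiator-side address in the root namespace

Note the two different -s flags: the first selects the RPC socket, the second is the NVMe-oF service ID (TCP port 4420).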
00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:40.309 08:23:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:41.252 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:41.252 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:41.252 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:41.252 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.252 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:41.513 NVMe0n1 00:24:41.513 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.513 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:41.513 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.513 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:41.513 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:41.513 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.513 1 00:24:41.513 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:41.513 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:41.513 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:41.513 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:41.513 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.513 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:41.513 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.513 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:41.513 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.513 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:41.513 request: 00:24:41.513 { 00:24:41.513 "name": "NVMe0", 00:24:41.513 "trtype": "tcp", 00:24:41.513 "traddr": "10.0.0.2", 00:24:41.513 "adrfam": "ipv4", 00:24:41.513 "trsvcid": "4420", 00:24:41.513 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:24:41.513 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:41.513 "hostaddr": "10.0.0.1", 00:24:41.513 "prchk_reftag": false, 00:24:41.513 "prchk_guard": false, 00:24:41.513 "hdgst": false, 00:24:41.513 "ddgst": false, 00:24:41.513 "allow_unrecognized_csi": false, 00:24:41.513 "method": "bdev_nvme_attach_controller", 00:24:41.513 "req_id": 1 00:24:41.513 } 00:24:41.514 Got JSON-RPC error response 00:24:41.514 response: 00:24:41.514 { 00:24:41.514 "code": -114, 00:24:41.514 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:41.514 } 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:41.514 request: 00:24:41.514 { 00:24:41.514 "name": "NVMe0", 00:24:41.514 "trtype": "tcp", 00:24:41.514 "traddr": "10.0.0.2", 00:24:41.514 "adrfam": "ipv4", 00:24:41.514 "trsvcid": "4420", 00:24:41.514 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:41.514 "hostaddr": "10.0.0.1", 00:24:41.514 "prchk_reftag": false, 00:24:41.514 "prchk_guard": false, 00:24:41.514 "hdgst": false, 00:24:41.514 "ddgst": false, 00:24:41.514 "allow_unrecognized_csi": false, 00:24:41.514 "method": "bdev_nvme_attach_controller", 00:24:41.514 "req_id": 1 00:24:41.514 } 00:24:41.514 Got JSON-RPC error response 00:24:41.514 response: 00:24:41.514 { 00:24:41.514 "code": -114, 00:24:41.514 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:41.514 } 00:24:41.514 08:23:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:41.514 request: 00:24:41.514 { 00:24:41.514 "name": "NVMe0", 00:24:41.514 "trtype": "tcp", 00:24:41.514 "traddr": "10.0.0.2", 00:24:41.514 "adrfam": "ipv4", 00:24:41.514 "trsvcid": "4420", 00:24:41.514 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:41.514 "hostaddr": "10.0.0.1", 00:24:41.514 "prchk_reftag": false, 00:24:41.514 "prchk_guard": false, 00:24:41.514 "hdgst": false, 00:24:41.514 "ddgst": false, 00:24:41.514 "multipath": "disable", 00:24:41.514 "allow_unrecognized_csi": false, 00:24:41.514 "method": "bdev_nvme_attach_controller", 00:24:41.514 "req_id": 1 00:24:41.514 } 00:24:41.514 Got JSON-RPC error response 00:24:41.514 response: 00:24:41.514 { 00:24:41.514 "code": -114, 00:24:41.514 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:24:41.514 } 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:41.514 08:23:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:41.514 request: 00:24:41.514 { 00:24:41.514 "name": "NVMe0", 00:24:41.514 "trtype": "tcp", 00:24:41.514 "traddr": "10.0.0.2", 00:24:41.514 "adrfam": "ipv4", 00:24:41.514 "trsvcid": "4420", 00:24:41.514 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:41.514 "hostaddr": "10.0.0.1", 00:24:41.514 "prchk_reftag": false, 00:24:41.514 "prchk_guard": false, 00:24:41.514 "hdgst": false, 00:24:41.514 "ddgst": false, 00:24:41.514 "multipath": "failover", 00:24:41.514 "allow_unrecognized_csi": false, 00:24:41.514 "method": "bdev_nvme_attach_controller", 00:24:41.514 "req_id": 1 00:24:41.514 } 00:24:41.514 Got JSON-RPC error response 00:24:41.514 response: 00:24:41.514 { 00:24:41.514 "code": -114, 00:24:41.514 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:41.514 } 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:41.514 NVMe0n1 00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
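Those four -114 rejections pin down how a controller name may be reused once NVMe0 exists at 10.0.0.2:4420: a different hostnqn, a different subsystem (cnode2), multipath "disable", and "failover" over the identical address all collide, while the @79 attach that only moves to the second listener port (4421, added at @34) is accepted and reports NVMe0n1, apparently giving the existing controller an additional path. A sketch of the one accepted variant, under the same rpc.py equivalence assumed above:

  scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1        # same subsystem, new port: adds a path, no collision

The @83 detach that follows names the same address and port, which removes just that path again before NVMe1 is attached for the I/O phase.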
00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:41.514 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:41.775
00:24:41.775 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:41.775 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:41.775 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe
00:24:41.775 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:41.775 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:41.775 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:41.775 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']'
00:24:41.775 08:23:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:43.160 {
00:24:43.160 "results": [
00:24:43.160 {
00:24:43.160 "job": "NVMe0n1",
00:24:43.160 "core_mask": "0x1",
00:24:43.160 "workload": "write",
00:24:43.160 "status": "finished",
00:24:43.160 "queue_depth": 128,
00:24:43.160 "io_size": 4096,
00:24:43.160 "runtime": 1.005966,
00:24:43.160 "iops": 24984.93984886169,
00:24:43.160 "mibps": 97.59742128461598,
00:24:43.160 "io_failed": 0,
00:24:43.160 "io_timeout": 0,
00:24:43.160 "avg_latency_us": 5111.905725577571,
00:24:43.160 "min_latency_us": 2157.2266666666665,
00:24:43.160 "max_latency_us": 11632.64
00:24:43.160 }
00:24:43.160 ],
00:24:43.160 "core_count": 1
00:24:43.160 }
00:24:43.160 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1
00:24:43.160 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.160 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:43.160 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.160 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]]
00:24:43.160 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2051399
00:24:43.160 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2051399 ']'
00:24:43.160 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2051399
00:24:43.160 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname
00:24:43.160 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:43.160 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2051399
00:24:43.160 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:43.160 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:43.160 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2051399'
00:24:43.160 killing process with pid 2051399
00:24:43.160 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2051399
00:24:43.160 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2051399
00:24:43.160 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:43.160 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.160 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:43.160 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.160 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file
00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f
00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u
00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat
00:24:43.161 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:24:43.161 [2024-11-28 08:23:37.539890] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization...
00:24:43.161 [2024-11-28 08:23:37.539965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2051399 ]
00:24:43.161 [2024-11-28 08:23:37.633411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:43.161 [2024-11-28 08:23:37.686610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:43.161 [2024-11-28 08:23:38.893516] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 83334ff1-7f1b-4b6d-bedf-70c962806f7c already exists
00:24:43.161 [2024-11-28 08:23:38.893564] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:83334ff1-7f1b-4b6d-bedf-70c962806f7c alias for bdev NVMe1n1
00:24:43.161 [2024-11-28 08:23:38.893575] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:24:43.161 Running I/O for 1 seconds...
00:24:43.161 24942.00 IOPS, 97.43 MiB/s
00:24:43.161 Latency(us)
00:24:43.161 [2024-11-28T07:23:40.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:43.161 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:24:43.161 NVMe0n1 : 1.01 24984.94 97.60 0.00 0.00 5111.91 2157.23 11632.64
00:24:43.161 [2024-11-28T07:23:40.450Z] ===================================================================================================================
00:24:43.161 [2024-11-28T07:23:40.450Z] Total : 24984.94 97.60 0.00 0.00 5111.91 2157.23 11632.64
00:24:43.161 Received shutdown signal, test time was about 1.000000 seconds
00:24:43.161
00:24:43.161 Latency(us)
00:24:43.161 [2024-11-28T07:23:40.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:43.161 [2024-11-28T07:23:40.450Z] ===================================================================================================================
00:24:43.161 [2024-11-28T07:23:40.450Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:43.161 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file
00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini
00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync
00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e
00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:43.161 rmmod nvme_tcp
00:24:43.161 rmmod nvme_fabrics
00:24:43.161 rmmod nvme_keyring
00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e
00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0
08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2051175 ']' 00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2051175 00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2051175 ']' 00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2051175 00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2051175 00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2051175' 00:24:43.161 killing process with pid 2051175 00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2051175 00:24:43.161 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2051175 00:24:43.422 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:43.422 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:43.422 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:43.422 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:43.422 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:24:43.422 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:43.422 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:24:43.422 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:43.422 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:43.422 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.423 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.423 08:23:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.336 08:23:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:45.336 00:24:45.336 real 0m13.875s 00:24:45.336 user 0m16.993s 00:24:45.336 sys 0m6.492s 00:24:45.336 08:23:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:45.336 08:23:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:45.336 ************************************ 00:24:45.336 END TEST nvmf_multicontroller 00:24:45.336 ************************************ 00:24:45.596 08:23:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:24:45.596 08:23:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:45.596 08:23:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:45.596 08:23:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.596 ************************************ 00:24:45.596 START TEST nvmf_aer 00:24:45.596 ************************************ 00:24:45.596 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:45.596 * Looking for test storage... 00:24:45.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:45.596 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:45.596 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:24:45.596 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:45.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.857 --rc genhtml_branch_coverage=1 00:24:45.857 --rc genhtml_function_coverage=1 00:24:45.857 --rc genhtml_legend=1 00:24:45.857 --rc geninfo_all_blocks=1 00:24:45.857 --rc geninfo_unexecuted_blocks=1 00:24:45.857 00:24:45.857 ' 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:45.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.857 --rc genhtml_branch_coverage=1 00:24:45.857 --rc genhtml_function_coverage=1 00:24:45.857 --rc genhtml_legend=1 00:24:45.857 --rc geninfo_all_blocks=1 00:24:45.857 --rc geninfo_unexecuted_blocks=1 00:24:45.857 00:24:45.857 ' 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:45.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.857 --rc genhtml_branch_coverage=1 00:24:45.857 --rc genhtml_function_coverage=1 00:24:45.857 --rc genhtml_legend=1 00:24:45.857 --rc geninfo_all_blocks=1 00:24:45.857 --rc geninfo_unexecuted_blocks=1 00:24:45.857 00:24:45.857 ' 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:45.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.857 --rc genhtml_branch_coverage=1 00:24:45.857 --rc genhtml_function_coverage=1 00:24:45.857 --rc genhtml_legend=1 00:24:45.857 --rc geninfo_all_blocks=1 00:24:45.857 --rc geninfo_unexecuted_blocks=1 00:24:45.857 00:24:45.857 ' 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:24:45.857 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:45.858 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:45.858 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.858 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.858 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.858 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:45.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:45.858 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:45.858 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:45.858 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:45.858 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:45.858 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:45.858 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.858 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:45.858 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:45.858 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:45.858 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.858 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.858 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.858 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:45.858 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:45.858 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:24:45.858 08:23:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:53.999 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:53.999 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:24:53.999 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:53.999 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:53.999 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:53.999 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:53.999 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:53.999 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:24:53.999 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:53.999 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:24:53.999 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:24:53.999 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:24:53.999 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:24:53.999 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:24:53.999 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:24:53.999 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:53.999 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:53.999 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:53.999 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:53.999 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:53.999 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:53.999 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:54.000 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:54.000 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:54.000 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:54.000 08:23:50 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:54.000 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:54.000 
08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:54.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:54.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms
00:24:54.000
00:24:54.000 --- 10.0.0.2 ping statistics ---
00:24:54.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:54.000 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms
00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:54.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:54.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms
00:24:54.000
00:24:54.000 --- 10.0.0.1 ping statistics ---
00:24:54.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:54.000 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms
00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0
00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2056091
00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2056091
00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2056091 ']'
00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:54.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:54.000 08:23:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:54.000 [2024-11-28 08:23:50.487231] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization...
00:24:54.000 [2024-11-28 08:23:50.487299] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.000 [2024-11-28 08:23:50.593191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:54.000 [2024-11-28 08:23:50.647673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.000 [2024-11-28 08:23:50.647734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.000 [2024-11-28 08:23:50.647743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.000 [2024-11-28 08:23:50.647749] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.000 [2024-11-28 08:23:50.647756] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:54.001 [2024-11-28 08:23:50.649784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.001 [2024-11-28 08:23:50.649954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:54.001 [2024-11-28 08:23:50.650114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:54.001 [2024-11-28 08:23:50.650117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.263 [2024-11-28 08:23:51.355682] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.263 Malloc0 00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]]
00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:54.263 [2024-11-28 08:23:51.429102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:54.263 [
00:24:54.263 {
00:24:54.263 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:24:54.263 "subtype": "Discovery",
00:24:54.263 "listen_addresses": [],
00:24:54.263 "allow_any_host": true,
00:24:54.263 "hosts": []
00:24:54.263 },
00:24:54.263 {
00:24:54.263 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:24:54.263 "subtype": "NVMe",
00:24:54.263 "listen_addresses": [
00:24:54.263 {
00:24:54.263 "trtype": "TCP",
00:24:54.263 "adrfam": "IPv4",
00:24:54.263 "traddr": "10.0.0.2",
00:24:54.263 "trsvcid": "4420"
00:24:54.263 }
00:24:54.263 ],
00:24:54.263 "allow_any_host": true,
00:24:54.263 "hosts": [],
00:24:54.263 "serial_number": "SPDK00000000000001",
00:24:54.263 "model_number": "SPDK bdev Controller",
00:24:54.263 "max_namespaces": 2,
00:24:54.263 "min_cntlid": 1,
00:24:54.263 "max_cntlid": 65519,
00:24:54.263 "namespaces": [
00:24:54.263 {
00:24:54.263 "nsid": 1,
00:24:54.263 "bdev_name": "Malloc0",
00:24:54.263 "name": "Malloc0",
00:24:54.263 "nguid": "46B08F8B29DA401084C6659A9A5C237C",
00:24:54.263 "uuid": "46b08f8b-29da-4010-84c6-659a9a5c237c"
00:24:54.263 }
00:24:54.263 ]
00:24:54.263 }
00:24:54.263 ]
00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2056442
00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0
00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!'
-e /tmp/aer_touch_file ']' 00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:24:54.263 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:54.527 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:54.527 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:24:54.527 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:24:54.527 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:54.527 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:54.527 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:24:54.527 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:24:54.527 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:54.527 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:54.527 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:54.527 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:24:54.527 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:54.527 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.527 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.789 Malloc1 00:24:54.789 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.789 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:54.789 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.789 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.789 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.789 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:54.789 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.789 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.789 Asynchronous Event Request test 00:24:54.789 Attaching to 10.0.0.2 00:24:54.789 Attached to 10.0.0.2 00:24:54.789 Registering asynchronous event callbacks... 00:24:54.789 Starting namespace attribute notice tests for all controllers... 00:24:54.789 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:54.789 aer_cb - Changed Namespace 00:24:54.789 Cleaning up... 
00:24:54.789 [
00:24:54.789 {
00:24:54.789 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:24:54.789 "subtype": "Discovery",
00:24:54.789 "listen_addresses": [],
00:24:54.789 "allow_any_host": true,
00:24:54.789 "hosts": []
00:24:54.789 },
00:24:54.789 {
00:24:54.789 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:24:54.789 "subtype": "NVMe",
00:24:54.789 "listen_addresses": [
00:24:54.789 {
00:24:54.789 "trtype": "TCP",
00:24:54.789 "adrfam": "IPv4",
00:24:54.789 "traddr": "10.0.0.2",
00:24:54.789 "trsvcid": "4420"
00:24:54.789 }
00:24:54.789 ],
00:24:54.789 "allow_any_host": true,
00:24:54.789 "hosts": [],
00:24:54.789 "serial_number": "SPDK00000000000001",
00:24:54.789 "model_number": "SPDK bdev Controller",
00:24:54.789 "max_namespaces": 2,
00:24:54.789 "min_cntlid": 1,
00:24:54.789 "max_cntlid": 65519,
00:24:54.789 "namespaces": [
00:24:54.789 {
00:24:54.789 "nsid": 1,
00:24:54.789 "bdev_name": "Malloc0",
00:24:54.789 "name": "Malloc0",
00:24:54.789 "nguid": "46B08F8B29DA401084C6659A9A5C237C",
00:24:54.789 "uuid": "46b08f8b-29da-4010-84c6-659a9a5c237c"
00:24:54.789 },
00:24:54.789 {
00:24:54.789 "nsid": 2,
00:24:54.789 "bdev_name": "Malloc1",
00:24:54.789 "name": "Malloc1",
00:24:54.789 "nguid": "82A94EFF502F498484AB571B2FC7A3B5",
00:24:54.789 "uuid": "82a94eff-502f-4984-84ab-571b2fc7a3b5"
00:24:54.789 }
00:24:54.789 ]
00:24:54.789 }
00:24:54.789 ]
00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2056442
00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0
00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1
00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT
00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini
00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync
00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e
00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:54.790 rmmod
nvme_tcp 00:24:54.790 rmmod nvme_fabrics 00:24:54.790 rmmod nvme_keyring 00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2056091 ']' 00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2056091 00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2056091 ']' 00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2056091 00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:54.790 08:23:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2056091 00:24:54.790 08:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:54.790 08:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:54.790 08:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2056091' 00:24:54.790 killing process with pid 2056091 00:24:54.790 08:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2056091 00:24:54.790 08:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2056091 00:24:55.051 08:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:55.051 08:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:55.051 08:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:55.051 08:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:24:55.051 08:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:24:55.051 08:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:55.051 08:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:24:55.051 08:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:55.051 08:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:55.051 08:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.051 08:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:55.051 08:23:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:57.596 00:24:57.596 real 0m11.607s 00:24:57.596 user 0m8.514s 00:24:57.596 sys 0m6.184s 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:57.596 ************************************ 00:24:57.596 END TEST nvmf_aer 00:24:57.596 ************************************ 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.596 ************************************ 00:24:57.596 START TEST nvmf_async_init 00:24:57.596 ************************************ 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:57.596 * Looking for test storage... 00:24:57.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:57.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.596 --rc genhtml_branch_coverage=1 00:24:57.596 --rc genhtml_function_coverage=1 00:24:57.596 --rc genhtml_legend=1 00:24:57.596 --rc geninfo_all_blocks=1 00:24:57.596 --rc geninfo_unexecuted_blocks=1 00:24:57.596 00:24:57.596 ' 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:57.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.596 --rc genhtml_branch_coverage=1 00:24:57.596 --rc genhtml_function_coverage=1 00:24:57.596 --rc genhtml_legend=1 00:24:57.596 --rc geninfo_all_blocks=1 00:24:57.596 --rc geninfo_unexecuted_blocks=1 00:24:57.596 00:24:57.596 ' 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:57.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.596 --rc genhtml_branch_coverage=1 00:24:57.596 --rc genhtml_function_coverage=1 00:24:57.596 --rc genhtml_legend=1 00:24:57.596 --rc geninfo_all_blocks=1 00:24:57.596 --rc geninfo_unexecuted_blocks=1 00:24:57.596 00:24:57.596 ' 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:57.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.596 --rc genhtml_branch_coverage=1 00:24:57.596 --rc genhtml_function_coverage=1 00:24:57.596 --rc genhtml_legend=1 00:24:57.596 --rc geninfo_all_blocks=1 00:24:57.596 --rc geninfo_unexecuted_blocks=1 00:24:57.596 00:24:57.596 ' 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.596 08:23:54 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.596 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:57.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:57.597 08:23:54 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=e12766532ef341548e13327b7243f121 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:24:57.597 08:23:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:05.737 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:05.737 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:05.737 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:05.737 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.737 08:24:01 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.737 08:24:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.737 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:05.737 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.737 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.737 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.737 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:05.737 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:05.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.726 ms 00:25:05.737 00:25:05.737 --- 10.0.0.2 ping statistics --- 00:25:05.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.737 rtt min/avg/max/mdev = 0.726/0.726/0.726/0.000 ms 00:25:05.737 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:05.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:25:05.737 00:25:05.737 --- 10.0.0.1 ping statistics --- 00:25:05.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.737 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:25:05.737 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.737 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:25:05.737 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:05.738 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.738 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:05.738 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:05.738 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.738 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:05.738 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:05.738 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:05.738 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:05.738 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:05.738 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:05.738 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2060875 00:25:05.738 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2060875 00:25:05.738 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:05.738 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2060875 ']' 00:25:05.738 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.738 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.738 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.738 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.738 08:24:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:05.738 [2024-11-28 08:24:02.266373] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:25:05.738 [2024-11-28 08:24:02.266437] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.738 [2024-11-28 08:24:02.367916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.738 [2024-11-28 08:24:02.419139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.738 [2024-11-28 08:24:02.419204] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:05.738 [2024-11-28 08:24:02.419213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:05.738 [2024-11-28 08:24:02.419226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:05.738 [2024-11-28 08:24:02.419232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:05.738 [2024-11-28 08:24:02.419996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:05.999 [2024-11-28 08:24:03.151414] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:05.999 null0 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e12766532ef341548e13327b7243f121 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:05.999 [2024-11-28 08:24:03.211841] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.999 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:06.261 nvme0n1 00:25:06.261 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.261 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:06.261 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.261 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:06.261 [ 00:25:06.261 { 00:25:06.261 "name": "nvme0n1", 00:25:06.261 "aliases": [ 00:25:06.261 "e1276653-2ef3-4154-8e13-327b7243f121" 00:25:06.261 ], 00:25:06.261 "product_name": "NVMe disk", 00:25:06.261 "block_size": 512, 00:25:06.261 "num_blocks": 2097152, 00:25:06.261 "uuid": "e1276653-2ef3-4154-8e13-327b7243f121", 00:25:06.261 "numa_id": 0, 00:25:06.261 "assigned_rate_limits": { 00:25:06.261 "rw_ios_per_sec": 0, 00:25:06.261 "rw_mbytes_per_sec": 0, 00:25:06.261 "r_mbytes_per_sec": 0, 00:25:06.261 "w_mbytes_per_sec": 0 00:25:06.261 }, 00:25:06.261 "claimed": false, 00:25:06.261 "zoned": false, 00:25:06.261 "supported_io_types": { 00:25:06.261 "read": true, 00:25:06.261 "write": true, 00:25:06.261 "unmap": false, 00:25:06.261 "flush": true, 00:25:06.261 "reset": true, 00:25:06.261 "nvme_admin": true, 00:25:06.261 "nvme_io": true, 00:25:06.261 "nvme_io_md": false, 00:25:06.261 "write_zeroes": true, 00:25:06.261 "zcopy": false, 00:25:06.261 "get_zone_info": false, 00:25:06.261 "zone_management": false, 00:25:06.261 "zone_append": false, 00:25:06.261 "compare": true, 00:25:06.261 "compare_and_write": true, 00:25:06.261 "abort": true, 00:25:06.261 "seek_hole": false, 00:25:06.261 "seek_data": false, 00:25:06.261 "copy": true, 00:25:06.261 "nvme_iov_md": false 00:25:06.261 }, 00:25:06.261 
"memory_domains": [ 00:25:06.261 { 00:25:06.261 "dma_device_id": "system", 00:25:06.261 "dma_device_type": 1 00:25:06.261 } 00:25:06.261 ], 00:25:06.261 "driver_specific": { 00:25:06.261 "nvme": [ 00:25:06.261 { 00:25:06.261 "trid": { 00:25:06.261 "trtype": "TCP", 00:25:06.261 "adrfam": "IPv4", 00:25:06.261 "traddr": "10.0.0.2", 00:25:06.261 "trsvcid": "4420", 00:25:06.261 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:06.261 }, 00:25:06.261 "ctrlr_data": { 00:25:06.261 "cntlid": 1, 00:25:06.261 "vendor_id": "0x8086", 00:25:06.261 "model_number": "SPDK bdev Controller", 00:25:06.261 "serial_number": "00000000000000000000", 00:25:06.261 "firmware_revision": "25.01", 00:25:06.261 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:06.261 "oacs": { 00:25:06.261 "security": 0, 00:25:06.261 "format": 0, 00:25:06.261 "firmware": 0, 00:25:06.261 "ns_manage": 0 00:25:06.261 }, 00:25:06.261 "multi_ctrlr": true, 00:25:06.261 "ana_reporting": false 00:25:06.261 }, 00:25:06.261 "vs": { 00:25:06.261 "nvme_version": "1.3" 00:25:06.261 }, 00:25:06.261 "ns_data": { 00:25:06.261 "id": 1, 00:25:06.261 "can_share": true 00:25:06.261 } 00:25:06.261 } 00:25:06.261 ], 00:25:06.261 "mp_policy": "active_passive" 00:25:06.261 } 00:25:06.261 } 00:25:06.261 ] 00:25:06.261 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.261 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:06.261 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.261 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:06.261 [2024-11-28 08:24:03.485686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:06.261 [2024-11-28 08:24:03.485773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1750ce0 (9): Bad file descriptor 00:25:06.523 [2024-11-28 08:24:03.618270] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:06.523 [ 00:25:06.523 { 00:25:06.523 "name": "nvme0n1", 00:25:06.523 "aliases": [ 00:25:06.523 "e1276653-2ef3-4154-8e13-327b7243f121" 00:25:06.523 ], 00:25:06.523 "product_name": "NVMe disk", 00:25:06.523 "block_size": 512, 00:25:06.523 "num_blocks": 2097152, 00:25:06.523 "uuid": "e1276653-2ef3-4154-8e13-327b7243f121", 00:25:06.523 "numa_id": 0, 00:25:06.523 "assigned_rate_limits": { 00:25:06.523 "rw_ios_per_sec": 0, 00:25:06.523 "rw_mbytes_per_sec": 0, 00:25:06.523 "r_mbytes_per_sec": 0, 00:25:06.523 "w_mbytes_per_sec": 0 00:25:06.523 }, 00:25:06.523 "claimed": false, 00:25:06.523 "zoned": false, 00:25:06.523 "supported_io_types": { 00:25:06.523 "read": true, 00:25:06.523 "write": true, 00:25:06.523 "unmap": false, 00:25:06.523 "flush": true, 00:25:06.523 "reset": true, 00:25:06.523 "nvme_admin": true, 00:25:06.523 "nvme_io": true, 00:25:06.523 "nvme_io_md": false, 00:25:06.523 "write_zeroes": true, 00:25:06.523 "zcopy": false, 00:25:06.523 "get_zone_info": false, 00:25:06.523 "zone_management": false, 00:25:06.523 "zone_append": false, 00:25:06.523 "compare": true, 00:25:06.523 "compare_and_write": true, 00:25:06.523 "abort": true, 00:25:06.523 "seek_hole": false, 00:25:06.523 "seek_data": false, 00:25:06.523 "copy": true, 00:25:06.523 "nvme_iov_md": false 00:25:06.523 }, 00:25:06.523 "memory_domains": [ 00:25:06.523 { 00:25:06.523 "dma_device_id": "system", 00:25:06.523 "dma_device_type": 1 00:25:06.523 } 00:25:06.523 ], 00:25:06.523 "driver_specific": { 00:25:06.523 "nvme": [ 00:25:06.523 { 00:25:06.523 "trid": { 00:25:06.523 "trtype": "TCP", 00:25:06.523 "adrfam": "IPv4", 00:25:06.523 "traddr": "10.0.0.2", 00:25:06.523 "trsvcid": "4420", 00:25:06.523 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:06.523 }, 00:25:06.523 "ctrlr_data": { 00:25:06.523 "cntlid": 2, 00:25:06.523 "vendor_id": "0x8086", 00:25:06.523 "model_number": "SPDK bdev Controller", 00:25:06.523 "serial_number": "00000000000000000000", 00:25:06.523 "firmware_revision": "25.01", 00:25:06.523 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:06.523 "oacs": { 00:25:06.523 "security": 0, 00:25:06.523 "format": 0, 00:25:06.523 "firmware": 0, 00:25:06.523 "ns_manage": 0 00:25:06.523 }, 00:25:06.523 "multi_ctrlr": true, 00:25:06.523 "ana_reporting": false 00:25:06.523 }, 00:25:06.523 "vs": { 00:25:06.523 "nvme_version": "1.3" 00:25:06.523 }, 00:25:06.523 "ns_data": { 00:25:06.523 "id": 1, 00:25:06.523 "can_share": true 00:25:06.523 } 00:25:06.523 } 00:25:06.523 ], 00:25:06.523 "mp_policy": "active_passive" 00:25:06.523 } 00:25:06.523 } 00:25:06.523 ] 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
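Controller IDs in these dumps are handed out dynamically, so each reconnect surfaces as a new cntlid: 1 in the first dump, 2 after the reset above, and 3 after the TLS re-attach below. With the same assumptions as the sketch above, the check reduces to:

"$SPDK/scripts/rpc.py" bdev_nvme_reset_controller nvme0             # same RPC the test drives
"$SPDK/scripts/rpc.py" bdev_get_bdevs -b nvme0n1 | grep '"cntlid"'  # expect the value to have moved on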
00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.sovEJA6cIs 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.sovEJA6cIs 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.sovEJA6cIs 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:06.523 [2024-11-28 08:24:03.710474] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:06.523 [2024-11-28 08:24:03.710650] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:06.523 [2024-11-28 08:24:03.734549] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:06.523 nvme0n1 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.523 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:06.785 [ 00:25:06.785 { 00:25:06.785 "name": "nvme0n1", 00:25:06.785 "aliases": [ 00:25:06.785 "e1276653-2ef3-4154-8e13-327b7243f121" 00:25:06.785 ], 00:25:06.785 "product_name": "NVMe disk", 00:25:06.785 "block_size": 512, 00:25:06.785 "num_blocks": 2097152, 00:25:06.785 "uuid": "e1276653-2ef3-4154-8e13-327b7243f121", 00:25:06.785 "numa_id": 0, 00:25:06.785 "assigned_rate_limits": { 00:25:06.785 "rw_ios_per_sec": 0, 00:25:06.785 "rw_mbytes_per_sec": 0, 00:25:06.785 "r_mbytes_per_sec": 0, 00:25:06.785 "w_mbytes_per_sec": 0 00:25:06.785 }, 00:25:06.785 "claimed": false, 00:25:06.785 "zoned": false, 00:25:06.785 "supported_io_types": { 00:25:06.785 "read": true, 00:25:06.785 "write": true, 00:25:06.785 "unmap": false, 00:25:06.785 "flush": true, 00:25:06.785 "reset": true, 00:25:06.785 "nvme_admin": true, 00:25:06.785 "nvme_io": true, 00:25:06.785 "nvme_io_md": false, 00:25:06.785 "write_zeroes": true, 00:25:06.785 "zcopy": false, 00:25:06.785 "get_zone_info": false, 00:25:06.785 "zone_management": false, 00:25:06.785 "zone_append": false, 00:25:06.785 "compare": true, 00:25:06.785 "compare_and_write": true, 00:25:06.785 "abort": true, 00:25:06.785 "seek_hole": false, 00:25:06.785 "seek_data": false, 00:25:06.785 "copy": true, 00:25:06.785 "nvme_iov_md": false 00:25:06.785 }, 00:25:06.785 "memory_domains": [ 00:25:06.785 { 00:25:06.785 "dma_device_id": "system", 00:25:06.785 "dma_device_type": 1 00:25:06.785 } 00:25:06.785 ], 00:25:06.785 "driver_specific": { 00:25:06.785 "nvme": [ 00:25:06.785 { 00:25:06.785 "trid": { 00:25:06.785 "trtype": "TCP", 00:25:06.785 "adrfam": "IPv4", 00:25:06.785 "traddr": "10.0.0.2", 00:25:06.785 "trsvcid": "4421", 00:25:06.785 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:06.785 }, 00:25:06.785 "ctrlr_data": { 00:25:06.785 "cntlid": 3, 00:25:06.785 "vendor_id": "0x8086", 00:25:06.785 "model_number": "SPDK bdev Controller", 00:25:06.785 "serial_number": "00000000000000000000", 00:25:06.785 "firmware_revision": "25.01", 00:25:06.785 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:06.785 "oacs": { 00:25:06.785 "security": 0, 00:25:06.785 "format": 0, 00:25:06.785 "firmware": 0, 00:25:06.785 "ns_manage": 0 00:25:06.785 }, 00:25:06.785 "multi_ctrlr": true, 00:25:06.785 "ana_reporting": false 00:25:06.785 }, 00:25:06.785 "vs": { 00:25:06.785 "nvme_version": "1.3" 00:25:06.785 }, 00:25:06.785 "ns_data": { 00:25:06.785 "id": 1, 00:25:06.785 "can_share": true 00:25:06.785 } 00:25:06.785 } 00:25:06.785 ], 00:25:06.785 "mp_policy": "active_passive" 00:25:06.785 } 00:25:06.785 } 00:25:06.785 ] 00:25:06.785 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.785 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.785 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.785 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:06.785 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.785 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.sovEJA6cIs 00:25:06.785 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
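The port 4421 listener above is the secured variant of the same subsystem: any-host access is switched off, the sample TLS interchange key is registered through the file-based keyring, and both the host entry and the attach reference it as key0. A minimal sketch of that flow under the same assumptions; the key string is the test's published sample, not a secret, and KEY_PATH stands in for the mktemp result (/tmp/tmp.sovEJA6cIs above):

rpc() { "$SPDK/scripts/rpc.py" "$@"; }                      # $SPDK is an assumed variable
KEY_PATH=$(mktemp)
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY_PATH"
chmod 0600 "$KEY_PATH"                                      # the test restricts the key file first
rpc keyring_file_add_key key0 "$KEY_PATH"
rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

Both ends log that TLS support is considered experimental, matching the notices in the trace.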
00:25:06.785 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:25:06.785 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:06.785 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:25:06.785 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:06.785 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:25:06.785 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:06.785 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:06.785 rmmod nvme_tcp 00:25:06.785 rmmod nvme_fabrics 00:25:06.785 rmmod nvme_keyring 00:25:06.785 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:06.785 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:25:06.785 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:25:06.785 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2060875 ']' 00:25:06.785 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2060875 00:25:06.785 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2060875 ']' 00:25:06.785 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2060875 00:25:06.785 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:25:06.785 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:06.785 08:24:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2060875 00:25:06.785 08:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:06.785 08:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:06.785 08:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2060875' 00:25:06.785 killing process with pid 2060875 00:25:06.785 08:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2060875 00:25:06.785 08:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2060875 00:25:07.046 08:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:07.046 08:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:07.046 08:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:07.046 08:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:25:07.046 08:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:25:07.046 08:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:07.046 08:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:25:07.046 08:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:07.046 08:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:07.046 08:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:25:07.046 08:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:07.046 08:24:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.590 08:24:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:09.590 00:25:09.590 real 0m11.858s 00:25:09.590 user 0m4.307s 00:25:09.590 sys 0m6.144s 00:25:09.590 08:24:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:09.590 08:24:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.590 ************************************ 00:25:09.590 END TEST nvmf_async_init 00:25:09.590 ************************************ 00:25:09.590 08:24:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:09.590 08:24:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:09.590 08:24:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:09.590 08:24:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.590 ************************************ 00:25:09.591 START TEST dma 00:25:09.591 ************************************ 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:09.591 * Looking for test storage... 00:25:09.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:09.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.591 --rc genhtml_branch_coverage=1 00:25:09.591 --rc genhtml_function_coverage=1 00:25:09.591 --rc genhtml_legend=1 00:25:09.591 --rc geninfo_all_blocks=1 00:25:09.591 --rc geninfo_unexecuted_blocks=1 00:25:09.591 00:25:09.591 ' 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:09.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.591 --rc genhtml_branch_coverage=1 00:25:09.591 --rc genhtml_function_coverage=1 00:25:09.591 --rc genhtml_legend=1 00:25:09.591 --rc geninfo_all_blocks=1 00:25:09.591 --rc geninfo_unexecuted_blocks=1 00:25:09.591 00:25:09.591 ' 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:09.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.591 --rc genhtml_branch_coverage=1 00:25:09.591 --rc genhtml_function_coverage=1 00:25:09.591 --rc genhtml_legend=1 00:25:09.591 --rc geninfo_all_blocks=1 00:25:09.591 --rc geninfo_unexecuted_blocks=1 00:25:09.591 00:25:09.591 ' 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:09.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:09.591 --rc genhtml_branch_coverage=1 00:25:09.591 --rc genhtml_function_coverage=1 00:25:09.591 --rc genhtml_legend=1 00:25:09.591 --rc geninfo_all_blocks=1 00:25:09.591 --rc geninfo_unexecuted_blocks=1 00:25:09.591 00:25:09.591 ' 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.591 
08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:09.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:25:09.591 00:25:09.591 real 0m0.236s 00:25:09.591 user 0m0.147s 00:25:09.591 sys 0m0.103s 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:09.591 08:24:06 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:09.591 ************************************ 00:25:09.591 END TEST dma 00:25:09.591 ************************************ 00:25:09.592 08:24:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:09.592 08:24:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:09.592 08:24:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:09.592 08:24:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.592 ************************************ 00:25:09.592 START TEST nvmf_identify 00:25:09.592 
************************************
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp
00:25:09.592 * Looking for test storage...
00:25:09.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-:
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-:
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<'
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:25:09.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:09.592 --rc genhtml_branch_coverage=1
00:25:09.592 --rc genhtml_function_coverage=1
00:25:09.592 --rc genhtml_legend=1
00:25:09.592 --rc geninfo_all_blocks=1
00:25:09.592 --rc geninfo_unexecuted_blocks=1
00:25:09.592
00:25:09.592 '
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:25:09.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:09.592 --rc genhtml_branch_coverage=1
00:25:09.592 --rc genhtml_function_coverage=1
00:25:09.592 --rc genhtml_legend=1
00:25:09.592 --rc geninfo_all_blocks=1
00:25:09.592 --rc geninfo_unexecuted_blocks=1
00:25:09.592
00:25:09.592 '
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:25:09.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:09.592 --rc genhtml_branch_coverage=1
00:25:09.592 --rc genhtml_function_coverage=1
00:25:09.592 --rc genhtml_legend=1
00:25:09.592 --rc geninfo_all_blocks=1
00:25:09.592 --rc geninfo_unexecuted_blocks=1
00:25:09.592
00:25:09.592 '
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:25:09.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:09.592 --rc genhtml_branch_coverage=1
00:25:09.592 --rc genhtml_function_coverage=1
00:25:09.592 --rc genhtml_legend=1
00:25:09.592 --rc geninfo_all_blocks=1
00:25:09.592 --rc geninfo_unexecuted_blocks=1
00:25:09.592
00:25:09.592 '
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify --
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.592 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:09.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:25:09.854 08:24:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:17.995 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.995 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:17.996 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
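The xtrace here is nvmf/common.sh walking the Intel E810 functions detected above (vendor:device 0x8086:0x159b, bound to the ice driver) and mapping each PCI function to its kernel net device through sysfs, producing the "Found net devices under ..." lines that follow. A minimal standalone sketch of the same enumeration, distilled from the trace rather than the verbatim gather_supported_nvmf_pci_devs, and assuming lspci is available:

  # Enumerate net devices backing Intel E810 functions (0x8086:0x159b),
  # mirroring the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) step above.
  for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
      for net in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$net" ] || continue   # skip functions with no bound net device
          echo "Found net devices under $pci: ${net##*/}"
      done
  done

On this machine the loop resolves 0000:4b:00.0 to cvl_0_0 and 0000:4b:00.1 to cvl_0_1, which become the target and initiator interfaces in the setup below.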
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]]
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:25:17.996 Found net devices under 0000:4b:00.0: cvl_0_0
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]]
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:25:17.996 Found net devices under 0000:4b:00.1: cvl_0_1
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:25:17.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:17.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms
00:25:17.996
00:25:17.996 --- 10.0.0.2 ping statistics ---
00:25:17.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:17.996 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:17.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:17.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms
00:25:17.996
00:25:17.996 --- 10.0.0.1 ping statistics ---
00:25:17.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:17.996 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2065920
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2065920
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2065920 ']'
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:17.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:17.996 08:24:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:17.996 [2024-11-28 08:24:14.550226] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization...
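The nvmf_tcp_init sequence above splits the two E810 ports between a fresh network namespace (cvl_0_0, the target side at 10.0.0.2) and the default namespace (cvl_0_1, the initiator side at 10.0.0.1), so NVMe/TCP traffic between them actually crosses the physical link even though both ends live on one host. Condensed from the commands traced above (only the ipts helper, which expands to the iptables rule shown, is paraphrased):

  ip netns add cvl_0_0_ns_spdk                                        # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on 4420
  ping -c 1 10.0.0.2                                                  # initiator -> target check

Launching nvmf_tgt under 'ip netns exec cvl_0_0_ns_spdk', as the test does above, makes the target listen from inside the namespace while its RPC socket (/var/tmp/spdk.sock) stays reachable from the default namespace, since UNIX domain sockets live in the filesystem rather than in a network stack.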
00:25:17.996 [2024-11-28 08:24:14.550298] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:17.996 [2024-11-28 08:24:14.650523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:17.996 [2024-11-28 08:24:14.705459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:17.996 [2024-11-28 08:24:14.705514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:17.996 [2024-11-28 08:24:14.705523] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:17.996 [2024-11-28 08:24:14.705531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:17.996 [2024-11-28 08:24:14.705537] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:17.996 [2024-11-28 08:24:14.707535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:17.996 [2024-11-28 08:24:14.707701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:25:17.996 [2024-11-28 08:24:14.707860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:17.996 [2024-11-28 08:24:14.707861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:18.257 [2024-11-28 08:24:15.385295] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:18.257 Malloc0
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:18.257 [2024-11-28 08:24:15.504961] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:25:18.257 [
00:25:18.257 {
00:25:18.257 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:25:18.257 "subtype": "Discovery",
00:25:18.257 "listen_addresses": [
00:25:18.257 {
00:25:18.257 "trtype": "TCP",
00:25:18.257 "adrfam": "IPv4",
00:25:18.257 "traddr": "10.0.0.2",
00:25:18.257 "trsvcid": "4420"
00:25:18.257 }
00:25:18.257 ],
00:25:18.257 "allow_any_host": true,
00:25:18.257 "hosts": []
00:25:18.257 },
00:25:18.257 {
00:25:18.257 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:25:18.257 "subtype": "NVMe",
00:25:18.257 "listen_addresses": [
00:25:18.257 {
00:25:18.257 "trtype": "TCP",
00:25:18.257 "adrfam": "IPv4",
00:25:18.257 "traddr": "10.0.0.2",
00:25:18.257 "trsvcid": "4420"
00:25:18.257 }
00:25:18.257 ],
00:25:18.257 "allow_any_host": true,
00:25:18.257 "hosts": [],
00:25:18.257 "serial_number": "SPDK00000000000001",
00:25:18.257 "model_number": "SPDK bdev Controller",
00:25:18.257 "max_namespaces": 32,
00:25:18.257 "min_cntlid": 1,
00:25:18.257 "max_cntlid": 65519,
00:25:18.257 "namespaces": [
00:25:18.257 {
00:25:18.257 "nsid": 1,
00:25:18.257 "bdev_name": "Malloc0",
00:25:18.257 "name": "Malloc0",
00:25:18.257 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:25:18.257 "eui64": "ABCDEF0123456789",
00:25:18.257 "uuid": "4cc6da8c-d5be-4e4a-8aef-18cc189a10d1"
00:25:18.257 }
00:25:18.257 ]
00:25:18.257 }
00:25:18.257 ]
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:18.257 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:18.523 [2024-11-28 08:24:15.569296] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:25:18.523 [2024-11-28 08:24:15.569344] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2066106 ] 00:25:18.523 [2024-11-28 08:24:15.625852] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:25:18.523 [2024-11-28 08:24:15.625929] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:18.523 [2024-11-28 08:24:15.625935] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:18.523 [2024-11-28 08:24:15.625957] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:18.523 [2024-11-28 08:24:15.625969] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:18.523 [2024-11-28 08:24:15.629590] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:25:18.523 [2024-11-28 08:24:15.629637] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc35690 0 00:25:18.523 [2024-11-28 08:24:15.637171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:18.523 [2024-11-28 08:24:15.637189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:18.523 [2024-11-28 08:24:15.637194] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:18.523 [2024-11-28 08:24:15.637198] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:18.523 [2024-11-28 08:24:15.637244] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.523 [2024-11-28 08:24:15.637250] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.523 [2024-11-28 08:24:15.637254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc35690) 00:25:18.523 [2024-11-28 08:24:15.637271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:18.523 [2024-11-28 08:24:15.637297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97100, cid 0, qid 0 00:25:18.523 [2024-11-28 08:24:15.645173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.523 [2024-11-28 08:24:15.645184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.523 [2024-11-28 08:24:15.645188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.523 [2024-11-28 08:24:15.645193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97100) on tqpair=0xc35690 00:25:18.523 [2024-11-28 08:24:15.645204] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:18.523 [2024-11-28 08:24:15.645213] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:25:18.523 [2024-11-28 08:24:15.645219] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:25:18.523 [2024-11-28 08:24:15.645238] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.523 [2024-11-28 08:24:15.645243] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.523 [2024-11-28 08:24:15.645247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc35690) 00:25:18.523 [2024-11-28 08:24:15.645256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.523 [2024-11-28 08:24:15.645273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97100, cid 0, qid 0 00:25:18.523 [2024-11-28 08:24:15.645491] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.523 [2024-11-28 08:24:15.645498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.523 [2024-11-28 08:24:15.645502] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.523 [2024-11-28 08:24:15.645506] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97100) on tqpair=0xc35690 00:25:18.523 [2024-11-28 08:24:15.645514] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:25:18.523 [2024-11-28 08:24:15.645522] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:25:18.523 [2024-11-28 08:24:15.645530] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.523 [2024-11-28 08:24:15.645534] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.523 [2024-11-28 08:24:15.645545] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc35690) 00:25:18.523 [2024-11-28 08:24:15.645553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.523 [2024-11-28 08:24:15.645564] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97100, cid 0, qid 0 00:25:18.523 [2024-11-28 08:24:15.645709] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.523 [2024-11-28 08:24:15.645715] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.523 [2024-11-28 08:24:15.645719] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.523 [2024-11-28 08:24:15.645723] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97100) on tqpair=0xc35690 00:25:18.523 [2024-11-28 08:24:15.645728] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:25:18.523 [2024-11-28 08:24:15.645737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:18.523 [2024-11-28 08:24:15.645744] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.523 [2024-11-28 08:24:15.645748] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.523 [2024-11-28 08:24:15.645751] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc35690) 00:25:18.523 [2024-11-28 08:24:15.645758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.523 [2024-11-28 08:24:15.645769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97100, cid 0, qid 0 
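Before this identify run, the target was provisioned by the rpc_cmd sequence above: create the TCP transport, back it with a Malloc bdev (64 MB, 512-byte blocks, matching the MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE values set earlier), create the subsystem, attach the namespace, add the data and discovery listeners, then dump the result with nvmf_get_subsystems. rpc_cmd is a thin wrapper over SPDK's scripts/rpc.py, so a hedged standalone equivalent, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock socket, looks like:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192      # transport flags copied from the trace
  $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_get_subsystems                          # prints the JSON shown above

The rpc.py path is the one this workspace uses; the calls themselves are exactly those the trace shows, and they work from the default namespace even though the target runs inside cvl_0_0_ns_spdk, for the UNIX-socket reason noted earlier.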
00:25:18.523 [2024-11-28 08:24:15.645991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.523 [2024-11-28 08:24:15.645997] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.523 [2024-11-28 08:24:15.646001] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.523 [2024-11-28 08:24:15.646005] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97100) on tqpair=0xc35690 00:25:18.523 [2024-11-28 08:24:15.646010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:18.523 [2024-11-28 08:24:15.646020] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.523 [2024-11-28 08:24:15.646024] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.523 [2024-11-28 08:24:15.646027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc35690) 00:25:18.523 [2024-11-28 08:24:15.646034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.523 [2024-11-28 08:24:15.646044] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97100, cid 0, qid 0 00:25:18.523 [2024-11-28 08:24:15.646207] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.523 [2024-11-28 08:24:15.646214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.523 [2024-11-28 08:24:15.646218] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.523 [2024-11-28 08:24:15.646222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97100) on tqpair=0xc35690 00:25:18.524 [2024-11-28 08:24:15.646226] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:18.524 [2024-11-28 08:24:15.646231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:18.524 [2024-11-28 08:24:15.646239] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:18.524 [2024-11-28 08:24:15.646348] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:25:18.524 [2024-11-28 08:24:15.646353] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:18.524 [2024-11-28 08:24:15.646365] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.646369] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.646372] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc35690) 00:25:18.524 [2024-11-28 08:24:15.646379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.524 [2024-11-28 08:24:15.646390] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97100, cid 0, qid 0 00:25:18.524 [2024-11-28 08:24:15.646628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.524 [2024-11-28 08:24:15.646634] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.524 [2024-11-28 08:24:15.646638] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.646642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97100) on tqpair=0xc35690 00:25:18.524 [2024-11-28 08:24:15.646646] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:18.524 [2024-11-28 08:24:15.646656] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.646660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.646664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc35690) 00:25:18.524 [2024-11-28 08:24:15.646671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.524 [2024-11-28 08:24:15.646681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97100, cid 0, qid 0 00:25:18.524 [2024-11-28 08:24:15.646885] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.524 [2024-11-28 08:24:15.646892] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.524 [2024-11-28 08:24:15.646895] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.646899] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97100) on tqpair=0xc35690 00:25:18.524 [2024-11-28 08:24:15.646903] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:18.524 [2024-11-28 08:24:15.646908] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:18.524 [2024-11-28 08:24:15.646915] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:25:18.524 [2024-11-28 08:24:15.646924] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:18.524 [2024-11-28 08:24:15.646934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.646938] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc35690) 00:25:18.524 [2024-11-28 08:24:15.646945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.524 [2024-11-28 08:24:15.646956] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97100, cid 0, qid 0 00:25:18.524 [2024-11-28 08:24:15.647124] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:18.524 [2024-11-28 08:24:15.647130] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:18.524 [2024-11-28 08:24:15.647134] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.647138] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc35690): datao=0, datal=4096, cccid=0 00:25:18.524 [2024-11-28 08:24:15.647143] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xc97100) on tqpair(0xc35690): expected_datao=0, payload_size=4096 00:25:18.524 [2024-11-28 08:24:15.647151] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.647177] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.647182] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.647312] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.524 [2024-11-28 08:24:15.647318] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.524 [2024-11-28 08:24:15.647322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.647326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97100) on tqpair=0xc35690 00:25:18.524 [2024-11-28 08:24:15.647334] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:25:18.524 [2024-11-28 08:24:15.647339] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:25:18.524 [2024-11-28 08:24:15.647343] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:25:18.524 [2024-11-28 08:24:15.647349] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:25:18.524 [2024-11-28 08:24:15.647354] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:25:18.524 [2024-11-28 08:24:15.647358] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:25:18.524 [2024-11-28 08:24:15.647366] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:18.524 [2024-11-28 08:24:15.647373] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.647377] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.647381] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc35690) 00:25:18.524 [2024-11-28 08:24:15.647388] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:18.524 [2024-11-28 08:24:15.647399] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97100, cid 0, qid 0 00:25:18.524 [2024-11-28 08:24:15.647628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.524 [2024-11-28 08:24:15.647635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.524 [2024-11-28 08:24:15.647639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.647643] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97100) on tqpair=0xc35690 00:25:18.524 [2024-11-28 08:24:15.647650] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.647654] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.647658] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc35690) 00:25:18.524 [2024-11-28 
08:24:15.647664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.524 [2024-11-28 08:24:15.647671] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.647674] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.647678] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc35690) 00:25:18.524 [2024-11-28 08:24:15.647684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.524 [2024-11-28 08:24:15.647690] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.647694] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.647697] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc35690) 00:25:18.524 [2024-11-28 08:24:15.647706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.524 [2024-11-28 08:24:15.647713] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.647716] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.647720] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc35690) 00:25:18.524 [2024-11-28 08:24:15.647726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.524 [2024-11-28 08:24:15.647730] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:18.524 [2024-11-28 08:24:15.647741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:18.524 [2024-11-28 08:24:15.647748] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.647752] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc35690) 00:25:18.524 [2024-11-28 08:24:15.647759] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.524 [2024-11-28 08:24:15.647771] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97100, cid 0, qid 0 00:25:18.524 [2024-11-28 08:24:15.647777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97280, cid 1, qid 0 00:25:18.524 [2024-11-28 08:24:15.647782] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97400, cid 2, qid 0 00:25:18.524 [2024-11-28 08:24:15.647786] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97580, cid 3, qid 0 00:25:18.524 [2024-11-28 08:24:15.647791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97700, cid 4, qid 0 00:25:18.524 [2024-11-28 08:24:15.648094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.524 [2024-11-28 08:24:15.648100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.524 [2024-11-28 08:24:15.648103] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.524 
[2024-11-28 08:24:15.648107] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97700) on tqpair=0xc35690 00:25:18.524 [2024-11-28 08:24:15.648113] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:25:18.524 [2024-11-28 08:24:15.648118] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:25:18.524 [2024-11-28 08:24:15.648128] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.524 [2024-11-28 08:24:15.648132] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc35690) 00:25:18.524 [2024-11-28 08:24:15.648139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.524 [2024-11-28 08:24:15.648149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97700, cid 4, qid 0 00:25:18.524 [2024-11-28 08:24:15.648296] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:18.525 [2024-11-28 08:24:15.648303] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:18.525 [2024-11-28 08:24:15.648307] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.648310] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc35690): datao=0, datal=4096, cccid=4 00:25:18.525 [2024-11-28 08:24:15.648315] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc97700) on tqpair(0xc35690): expected_datao=0, payload_size=4096 00:25:18.525 [2024-11-28 08:24:15.648319] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.648366] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.648370] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.648514] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.525 [2024-11-28 08:24:15.648520] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.525 [2024-11-28 08:24:15.648524] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.648528] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97700) on tqpair=0xc35690 00:25:18.525 [2024-11-28 08:24:15.648541] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:25:18.525 [2024-11-28 08:24:15.648567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.648572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc35690) 00:25:18.525 [2024-11-28 08:24:15.648579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.525 [2024-11-28 08:24:15.648586] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.648590] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.648593] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc35690) 00:25:18.525 [2024-11-28 08:24:15.648600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:25:18.525 [2024-11-28 08:24:15.648614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97700, cid 4, qid 0 00:25:18.525 [2024-11-28 08:24:15.648620] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97880, cid 5, qid 0 00:25:18.525 [2024-11-28 08:24:15.648887] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:18.525 [2024-11-28 08:24:15.648893] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:18.525 [2024-11-28 08:24:15.648897] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.648900] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc35690): datao=0, datal=1024, cccid=4 00:25:18.525 [2024-11-28 08:24:15.648905] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc97700) on tqpair(0xc35690): expected_datao=0, payload_size=1024 00:25:18.525 [2024-11-28 08:24:15.648909] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.648916] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.648920] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.648925] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.525 [2024-11-28 08:24:15.648931] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.525 [2024-11-28 08:24:15.648934] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.648938] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97880) on tqpair=0xc35690 00:25:18.525 [2024-11-28 08:24:15.693174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.525 [2024-11-28 08:24:15.693188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.525 [2024-11-28 08:24:15.693191] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.693196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97700) on tqpair=0xc35690 00:25:18.525 [2024-11-28 08:24:15.693209] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.693215] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc35690) 00:25:18.525 [2024-11-28 08:24:15.693223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.525 [2024-11-28 08:24:15.693240] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97700, cid 4, qid 0 00:25:18.525 [2024-11-28 08:24:15.693443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:18.525 [2024-11-28 08:24:15.693451] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:18.525 [2024-11-28 08:24:15.693461] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.693465] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc35690): datao=0, datal=3072, cccid=4 00:25:18.525 [2024-11-28 08:24:15.693470] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc97700) on tqpair(0xc35690): expected_datao=0, payload_size=3072 00:25:18.525 [2024-11-28 08:24:15.693474] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
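The GET LOG PAGE (02) capsules around this point are the host fetching the discovery log. In the standard NVMe Get Log Page layout, the log identifier sits in cdw10[7:0] and the zero-based lower dword count (NUMDL) in cdw10[31:16], so the cdw10 values seen in this trace - 00ff0070, 02ff0070 and 00010070 - all target log page 0x70 (discovery) with 1024-, 3072- and 8-byte transfers, which matches the datal/payload_size values in the adjacent C2H data PDUs. A minimal standalone decode of those values (the cdw10 constants are copied from this log; the bit layout is the generic NVMe spec format, not SPDK-specific code):

    #include <stdio.h>

    int main(void)
    {
        /* cdw10 values taken from the GET LOG PAGE capsules in this trace */
        const unsigned int cdw10[] = { 0x00ff0070, 0x02ff0070, 0x00010070 };

        for (int i = 0; i < 3; i++) {
            unsigned int lid   = cdw10[i] & 0xff;           /* log page id: 0x70 = discovery */
            unsigned int numdl = (cdw10[i] >> 16) & 0xffff; /* zero-based dword count */
            printf("cdw10=0x%08x lid=0x%02x transfer=%u bytes\n",
                   cdw10[i], lid, (numdl + 1) * 4);
        }
        return 0;
    }

This prints 1024, 3072 and 8 bytes: the host first reads a 1024-byte chunk of the discovery log, re-reads 3072 bytes once it knows the record count (header plus two 1024-byte records), and finally polls the 8-byte generation counter to confirm the log did not change mid-read.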
00:25:18.525 [2024-11-28 08:24:15.693481] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.693485] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.693668] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.525 [2024-11-28 08:24:15.693675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.525 [2024-11-28 08:24:15.693681] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.693686] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97700) on tqpair=0xc35690 00:25:18.525 [2024-11-28 08:24:15.693695] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.693699] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc35690) 00:25:18.525 [2024-11-28 08:24:15.693706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.525 [2024-11-28 08:24:15.693720] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97700, cid 4, qid 0 00:25:18.525 [2024-11-28 08:24:15.693967] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:18.525 [2024-11-28 08:24:15.693975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:18.525 [2024-11-28 08:24:15.693978] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.693982] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc35690): datao=0, datal=8, cccid=4 00:25:18.525 [2024-11-28 08:24:15.693986] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc97700) on tqpair(0xc35690): expected_datao=0, payload_size=8 00:25:18.525 [2024-11-28 08:24:15.693990] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.693997] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.694001] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.734377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.525 [2024-11-28 08:24:15.734389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.525 [2024-11-28 08:24:15.734393] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.525 [2024-11-28 08:24:15.734397] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97700) on tqpair=0xc35690
00:25:18.525 =====================================================
00:25:18.525 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:25:18.525 =====================================================
00:25:18.525 Controller Capabilities/Features
00:25:18.525 ================================
00:25:18.525 Vendor ID: 0000
00:25:18.525 Subsystem Vendor ID: 0000
00:25:18.525 Serial Number: ....................
00:25:18.525 Model Number: ........................................
00:25:18.525 Firmware Version: 25.01
00:25:18.525 Recommended Arb Burst: 0
00:25:18.525 IEEE OUI Identifier: 00 00 00
00:25:18.525 Multi-path I/O
00:25:18.525 May have multiple subsystem ports: No
00:25:18.525 May have multiple controllers: No
00:25:18.525 Associated with SR-IOV VF: No
00:25:18.525 Max Data Transfer Size: 131072
00:25:18.525 Max Number of Namespaces: 0
00:25:18.525 Max Number of I/O Queues: 1024
00:25:18.525 NVMe Specification Version (VS): 1.3
00:25:18.525 NVMe Specification Version (Identify): 1.3
00:25:18.525 Maximum Queue Entries: 128
00:25:18.525 Contiguous Queues Required: Yes
00:25:18.525 Arbitration Mechanisms Supported
00:25:18.525 Weighted Round Robin: Not Supported
00:25:18.525 Vendor Specific: Not Supported
00:25:18.525 Reset Timeout: 15000 ms
00:25:18.525 Doorbell Stride: 4 bytes
00:25:18.525 NVM Subsystem Reset: Not Supported
00:25:18.525 Command Sets Supported
00:25:18.525 NVM Command Set: Supported
00:25:18.525 Boot Partition: Not Supported
00:25:18.525 Memory Page Size Minimum: 4096 bytes
00:25:18.525 Memory Page Size Maximum: 4096 bytes
00:25:18.525 Persistent Memory Region: Not Supported
00:25:18.525 Optional Asynchronous Events Supported
00:25:18.525 Namespace Attribute Notices: Not Supported
00:25:18.525 Firmware Activation Notices: Not Supported
00:25:18.525 ANA Change Notices: Not Supported
00:25:18.525 PLE Aggregate Log Change Notices: Not Supported
00:25:18.525 LBA Status Info Alert Notices: Not Supported
00:25:18.525 EGE Aggregate Log Change Notices: Not Supported
00:25:18.525 Normal NVM Subsystem Shutdown event: Not Supported
00:25:18.525 Zone Descriptor Change Notices: Not Supported
00:25:18.525 Discovery Log Change Notices: Supported
00:25:18.525 Controller Attributes
00:25:18.525 128-bit Host Identifier: Not Supported
00:25:18.525 Non-Operational Permissive Mode: Not Supported
00:25:18.525 NVM Sets: Not Supported
00:25:18.525 Read Recovery Levels: Not Supported
00:25:18.525 Endurance Groups: Not Supported
00:25:18.526 Predictable Latency Mode: Not Supported
00:25:18.526 Traffic Based Keep ALive: Not Supported
00:25:18.526 Namespace Granularity: Not Supported
00:25:18.526 SQ Associations: Not Supported
00:25:18.526 UUID List: Not Supported
00:25:18.526 Multi-Domain Subsystem: Not Supported
00:25:18.526 Fixed Capacity Management: Not Supported
00:25:18.526 Variable Capacity Management: Not Supported
00:25:18.526 Delete Endurance Group: Not Supported
00:25:18.526 Delete NVM Set: Not Supported
00:25:18.526 Extended LBA Formats Supported: Not Supported
00:25:18.526 Flexible Data Placement Supported: Not Supported
00:25:18.526
00:25:18.526 Controller Memory Buffer Support
00:25:18.526 ================================
00:25:18.526 Supported: No
00:25:18.526
00:25:18.526 Persistent Memory Region Support
00:25:18.526 ================================
00:25:18.526 Supported: No
00:25:18.526
00:25:18.526 Admin Command Set Attributes
00:25:18.526 ============================
00:25:18.526 Security Send/Receive: Not Supported
00:25:18.526 Format NVM: Not Supported
00:25:18.526 Firmware Activate/Download: Not Supported
00:25:18.526 Namespace Management: Not Supported
00:25:18.526 Device Self-Test: Not Supported
00:25:18.526 Directives: Not Supported
00:25:18.526 NVMe-MI: Not Supported
00:25:18.526 Virtualization Management: Not Supported
00:25:18.526 Doorbell Buffer Config: Not Supported
00:25:18.526 Get LBA Status Capability: Not Supported
00:25:18.526 Command & Feature Lockdown Capability: Not Supported
00:25:18.526 Abort Command Limit: 1
00:25:18.526 Async Event Request Limit: 4
00:25:18.526 Number of Firmware Slots: N/A
00:25:18.526 Firmware Slot 1 Read-Only: N/A
00:25:18.526 Firmware Activation Without Reset: N/A
00:25:18.526 Multiple Update Detection Support: N/A
00:25:18.526 Firmware Update Granularity: No Information Provided
00:25:18.526 Per-Namespace SMART Log: No
00:25:18.526 Asymmetric Namespace Access Log Page: Not Supported
00:25:18.526 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:25:18.526 Command Effects Log Page: Not Supported
00:25:18.526 Get Log Page Extended Data: Supported
00:25:18.526 Telemetry Log Pages: Not Supported
00:25:18.526 Persistent Event Log Pages: Not Supported
00:25:18.526 Supported Log Pages Log Page: May Support
00:25:18.526 Commands Supported & Effects Log Page: Not Supported
00:25:18.526 Feature Identifiers & Effects Log Page: May Support
00:25:18.526 NVMe-MI Commands & Effects Log Page: May Support
00:25:18.526 Data Area 4 for Telemetry Log: Not Supported
00:25:18.526 Error Log Page Entries Supported: 128
00:25:18.526 Keep Alive: Not Supported
00:25:18.526
00:25:18.526 NVM Command Set Attributes
00:25:18.526 ==========================
00:25:18.526 Submission Queue Entry Size
00:25:18.526 Max: 1
00:25:18.526 Min: 1
00:25:18.526 Completion Queue Entry Size
00:25:18.526 Max: 1
00:25:18.526 Min: 1
00:25:18.526 Number of Namespaces: 0
00:25:18.526 Compare Command: Not Supported
00:25:18.526 Write Uncorrectable Command: Not Supported
00:25:18.526 Dataset Management Command: Not Supported
00:25:18.526 Write Zeroes Command: Not Supported
00:25:18.526 Set Features Save Field: Not Supported
00:25:18.526 Reservations: Not Supported
00:25:18.526 Timestamp: Not Supported
00:25:18.526 Copy: Not Supported
00:25:18.526 Volatile Write Cache: Not Present
00:25:18.526 Atomic Write Unit (Normal): 1
00:25:18.526 Atomic Write Unit (PFail): 1
00:25:18.526 Atomic Compare & Write Unit: 1
00:25:18.526 Fused Compare & Write: Supported
00:25:18.526 Scatter-Gather List
00:25:18.526 SGL Command Set: Supported
00:25:18.526 SGL Keyed: Supported
00:25:18.526 SGL Bit Bucket Descriptor: Not Supported
00:25:18.526 SGL Metadata Pointer: Not Supported
00:25:18.526 Oversized SGL: Not Supported
00:25:18.526 SGL Metadata Address: Not Supported
00:25:18.526 SGL Offset: Supported
00:25:18.526 Transport SGL Data Block: Not Supported
00:25:18.526 Replay Protected Memory Block: Not Supported
00:25:18.526
00:25:18.526 Firmware Slot Information
00:25:18.526 =========================
00:25:18.526 Active slot: 0
00:25:18.526
00:25:18.526
00:25:18.526 Error Log
00:25:18.526 =========
00:25:18.526
00:25:18.526 Active Namespaces
00:25:18.526 =================
00:25:18.526 Discovery Log Page
00:25:18.526 ==================
00:25:18.526 Generation Counter: 2
00:25:18.526 Number of Records: 2
00:25:18.526 Record Format: 0
00:25:18.526
00:25:18.526 Discovery Log Entry 0
00:25:18.526 ----------------------
00:25:18.526 Transport Type: 3 (TCP)
00:25:18.526 Address Family: 1 (IPv4)
00:25:18.526 Subsystem Type: 3 (Current Discovery Subsystem)
00:25:18.526 Entry Flags:
00:25:18.526 Duplicate Returned Information: 1
00:25:18.526 Explicit Persistent Connection Support for Discovery: 1
00:25:18.526 Transport Requirements:
00:25:18.526 Secure Channel: Not Required
00:25:18.526 Port ID: 0 (0x0000)
00:25:18.526 Controller ID: 65535 (0xffff)
00:25:18.526 Admin Max SQ Size: 128
00:25:18.526 Transport Service Identifier: 4420
00:25:18.526 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:25:18.526 Transport Address: 10.0.0.2
00:25:18.526
00:25:18.526 Discovery Log Entry 1
00:25:18.526 ----------------------
00:25:18.526 Transport Type: 3 (TCP)
00:25:18.526 Address Family: 1 (IPv4)
00:25:18.526 Subsystem Type: 2 (NVM Subsystem)
00:25:18.526 Entry Flags:
00:25:18.526 Duplicate Returned Information: 0
00:25:18.526 Explicit Persistent Connection Support for Discovery: 0
00:25:18.526 Transport Requirements:
00:25:18.526 Secure Channel: Not Required
00:25:18.526 Port ID: 0 (0x0000)
00:25:18.526 Controller ID: 65535 (0xffff)
00:25:18.526 Admin Max SQ Size: 128
00:25:18.526 Transport Service Identifier: 4420
00:25:18.526 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:25:18.526 Transport Address: 10.0.0.2
[2024-11-28 08:24:15.734501] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:25:18.526 [2024-11-28 08:24:15.734513] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97100) on tqpair=0xc35690 00:25:18.526 [2024-11-28 08:24:15.734520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.526 [2024-11-28 08:24:15.734526] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97280) on tqpair=0xc35690 00:25:18.526 [2024-11-28 08:24:15.734531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.526 [2024-11-28 08:24:15.734536] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97400) on tqpair=0xc35690 00:25:18.526 [2024-11-28 08:24:15.734540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.526 [2024-11-28 08:24:15.734545] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97580) on tqpair=0xc35690 00:25:18.526 [2024-11-28 08:24:15.734550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.526 [2024-11-28 08:24:15.734562] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.526 [2024-11-28 08:24:15.734566] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.526 [2024-11-28 08:24:15.734569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc35690) 00:25:18.526 [2024-11-28 08:24:15.734578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.526 [2024-11-28 08:24:15.734594] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97580, cid 3, qid 0 00:25:18.526 [2024-11-28 08:24:15.734855] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.526 [2024-11-28 08:24:15.734862] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.526 [2024-11-28 08:24:15.734865] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.526 [2024-11-28 08:24:15.734869] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97580) on tqpair=0xc35690 00:25:18.526 [2024-11-28 08:24:15.734877] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.526 [2024-11-28 08:24:15.734881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.526 [2024-11-28 08:24:15.734884] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc35690) 00:25:18.526 [2024-11-28 08:24:15.734891]
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.526 [2024-11-28 08:24:15.734904] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97580, cid 3, qid 0 00:25:18.526 [2024-11-28 08:24:15.735105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.526 [2024-11-28 08:24:15.735111] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.526 [2024-11-28 08:24:15.735114] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.526 [2024-11-28 08:24:15.735118] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97580) on tqpair=0xc35690 00:25:18.526 [2024-11-28 08:24:15.735124] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:25:18.526 [2024-11-28 08:24:15.735129] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:25:18.526 [2024-11-28 08:24:15.735138] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.526 [2024-11-28 08:24:15.735142] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.735146] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc35690) 00:25:18.527 [2024-11-28 08:24:15.735153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.527 [2024-11-28 08:24:15.735170] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97580, cid 3, qid 0 00:25:18.527 [2024-11-28 08:24:15.735327] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.527 [2024-11-28 08:24:15.735333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.527 [2024-11-28 08:24:15.735336] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.735340] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97580) on tqpair=0xc35690 00:25:18.527 [2024-11-28 08:24:15.735350] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.735354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.735358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc35690) 00:25:18.527 [2024-11-28 08:24:15.735365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.527 [2024-11-28 08:24:15.735376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97580, cid 3, qid 0 00:25:18.527 [2024-11-28 08:24:15.735565] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.527 [2024-11-28 08:24:15.735571] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.527 [2024-11-28 08:24:15.735577] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.735581] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97580) on tqpair=0xc35690 00:25:18.527 [2024-11-28 08:24:15.735591] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.735595] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.735598] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc35690) 00:25:18.527 [2024-11-28 08:24:15.735605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.527 [2024-11-28 08:24:15.735616] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97580, cid 3, qid 0 00:25:18.527 [2024-11-28 08:24:15.735793] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.527 [2024-11-28 08:24:15.735799] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.527 [2024-11-28 08:24:15.735803] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.735807] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97580) on tqpair=0xc35690 00:25:18.527 [2024-11-28 08:24:15.735817] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.735821] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.735824] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc35690) 00:25:18.527 [2024-11-28 08:24:15.735831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.527 [2024-11-28 08:24:15.735842] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97580, cid 3, qid 0 00:25:18.527 [2024-11-28 08:24:15.735982] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.527 [2024-11-28 08:24:15.735988] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.527 [2024-11-28 08:24:15.735992] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.735996] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97580) on tqpair=0xc35690 00:25:18.527 [2024-11-28 08:24:15.736006] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.736010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.736014] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc35690) 00:25:18.527 [2024-11-28 08:24:15.736021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.527 [2024-11-28 08:24:15.736031] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97580, cid 3, qid 0 00:25:18.527 [2024-11-28 08:24:15.736275] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.527 [2024-11-28 08:24:15.736282] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.527 [2024-11-28 08:24:15.736285] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.736289] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97580) on tqpair=0xc35690 00:25:18.527 [2024-11-28 08:24:15.736298] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.736302] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.736306] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc35690) 00:25:18.527 [2024-11-28 08:24:15.736313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.527 [2024-11-28 08:24:15.736324] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97580, cid 3, qid 0 00:25:18.527 [2024-11-28 08:24:15.736529] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.527 [2024-11-28 08:24:15.736536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.527 [2024-11-28 08:24:15.736539] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.736545] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97580) on tqpair=0xc35690 00:25:18.527 [2024-11-28 08:24:15.736556] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.736560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.736564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc35690) 00:25:18.527 [2024-11-28 08:24:15.736570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.527 [2024-11-28 08:24:15.736582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97580, cid 3, qid 0 00:25:18.527 [2024-11-28 08:24:15.736715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.527 [2024-11-28 08:24:15.736721] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.527 [2024-11-28 08:24:15.736724] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.736728] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97580) on tqpair=0xc35690 00:25:18.527 [2024-11-28 08:24:15.736738] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.736742] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.736745] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc35690) 00:25:18.527 [2024-11-28 08:24:15.736752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.527 [2024-11-28 08:24:15.736762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97580, cid 3, qid 0 00:25:18.527 [2024-11-28 08:24:15.736941] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.527 [2024-11-28 08:24:15.736947] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.527 [2024-11-28 08:24:15.736950] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.736954] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97580) on tqpair=0xc35690 00:25:18.527 [2024-11-28 08:24:15.736965] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.736969] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.736972] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc35690) 00:25:18.527 [2024-11-28 08:24:15.736979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.527 [2024-11-28 08:24:15.736989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97580, cid 3, qid 0 00:25:18.527 [2024-11-28 08:24:15.741170] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.527 [2024-11-28 08:24:15.741179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.527 [2024-11-28 08:24:15.741182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.741186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97580) on tqpair=0xc35690 00:25:18.527 [2024-11-28 08:24:15.741196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.741200] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.741204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc35690) 00:25:18.527 [2024-11-28 08:24:15.741211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.527 [2024-11-28 08:24:15.741223] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc97580, cid 3, qid 0 00:25:18.527 [2024-11-28 08:24:15.741408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.527 [2024-11-28 08:24:15.741414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.527 [2024-11-28 08:24:15.741418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.527 [2024-11-28 08:24:15.741422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xc97580) on tqpair=0xc35690 00:25:18.527 [2024-11-28 08:24:15.741432] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:25:18.527 00:25:18.527 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:18.527 [2024-11-28 08:24:15.789671] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
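The spdk_nvme_identify invocation above drives everything traced below: ICReq/ICResp exchange, FABRIC CONNECT, the VS/CAP/CC/CSTS property gets, controller enable, and the IDENTIFY commands. The core of that flow can be sketched against SPDK's public host API (a minimal sketch, not the identify tool's actual source; error handling trimmed, transport ID string copied from the -r argument above):

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};

        spdk_env_opts_init(&env_opts);
        if (spdk_env_init(&env_opts) < 0)
            return 1;

        /* same transport ID string the harness passes via -r above */
        spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1");

        /* spdk_nvme_connect() performs the whole admin bring-up traced in
         * this log: icreq, FABRIC CONNECT, property gets, CC.EN = 1,
         * wait for CSTS.RDY = 1, then IDENTIFY. */
        struct spdk_nvme_ctrlr *ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL)
            return 1;

        const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("subnqn: %s, mdts: %u\n", cdata->subnqn, (unsigned int)cdata->mdts);

        spdk_nvme_detach(ctrlr);
        return 0;
    }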
00:25:18.527 [2024-11-28 08:24:15.789724] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2066112 ] 00:25:18.795 [2024-11-28 08:24:15.844697] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:25:18.795 [2024-11-28 08:24:15.844757] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:18.795 [2024-11-28 08:24:15.844762] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:18.795 [2024-11-28 08:24:15.844782] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:18.795 [2024-11-28 08:24:15.844793] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:18.795 [2024-11-28 08:24:15.848458] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:25:18.795 [2024-11-28 08:24:15.848497] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb55690 0 00:25:18.795 [2024-11-28 08:24:15.856174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:18.795 [2024-11-28 08:24:15.856190] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:18.795 [2024-11-28 08:24:15.856194] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:18.795 [2024-11-28 08:24:15.856198] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:18.795 [2024-11-28 08:24:15.856236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.795 [2024-11-28 08:24:15.856243] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.795 [2024-11-28 08:24:15.856247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb55690) 00:25:18.795 [2024-11-28 08:24:15.856261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:18.795 [2024-11-28 08:24:15.856285] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7100, cid 0, qid 0 00:25:18.795 [2024-11-28 08:24:15.864173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.795 [2024-11-28 08:24:15.864185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.795 [2024-11-28 08:24:15.864190] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.795 [2024-11-28 08:24:15.864195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7100) on tqpair=0xb55690 00:25:18.795 [2024-11-28 08:24:15.864204] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:18.795 [2024-11-28 08:24:15.864213] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:25:18.795 [2024-11-28 08:24:15.864219] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:25:18.795 [2024-11-28 08:24:15.864236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.795 [2024-11-28 08:24:15.864240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.795 [2024-11-28 08:24:15.864244] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb55690) 00:25:18.795 [2024-11-28 08:24:15.864253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.795 [2024-11-28 08:24:15.864274] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7100, cid 0, qid 0 00:25:18.795 [2024-11-28 08:24:15.864513] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.795 [2024-11-28 08:24:15.864519] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.795 [2024-11-28 08:24:15.864522] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.795 [2024-11-28 08:24:15.864526] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7100) on tqpair=0xb55690 00:25:18.795 [2024-11-28 08:24:15.864534] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:25:18.795 [2024-11-28 08:24:15.864542] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:25:18.795 [2024-11-28 08:24:15.864549] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.795 [2024-11-28 08:24:15.864553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.795 [2024-11-28 08:24:15.864557] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb55690) 00:25:18.795 [2024-11-28 08:24:15.864564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.795 [2024-11-28 08:24:15.864575] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7100, cid 0, qid 0 00:25:18.795 [2024-11-28 08:24:15.864788] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.795 [2024-11-28 08:24:15.864796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.795 [2024-11-28 08:24:15.864802] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.795 [2024-11-28 08:24:15.864806] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7100) on tqpair=0xb55690 00:25:18.795 [2024-11-28 08:24:15.864812] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:25:18.795 [2024-11-28 08:24:15.864820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:18.795 [2024-11-28 08:24:15.864827] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.795 [2024-11-28 08:24:15.864831] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.795 [2024-11-28 08:24:15.864838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb55690) 00:25:18.795 [2024-11-28 08:24:15.864846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.795 [2024-11-28 08:24:15.864857] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7100, cid 0, qid 0 00:25:18.795 [2024-11-28 08:24:15.865047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.795 [2024-11-28 08:24:15.865055] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.795 [2024-11-28 08:24:15.865059] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.795 [2024-11-28 08:24:15.865062] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7100) on tqpair=0xb55690 00:25:18.796 [2024-11-28 08:24:15.865067] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:18.796 [2024-11-28 08:24:15.865078] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.865081] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.865085] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb55690) 00:25:18.796 [2024-11-28 08:24:15.865094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.796 [2024-11-28 08:24:15.865105] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7100, cid 0, qid 0 00:25:18.796 [2024-11-28 08:24:15.865310] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.796 [2024-11-28 08:24:15.865320] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.796 [2024-11-28 08:24:15.865332] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.865339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7100) on tqpair=0xb55690 00:25:18.796 [2024-11-28 08:24:15.865345] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:18.796 [2024-11-28 08:24:15.865352] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:18.796 [2024-11-28 08:24:15.865361] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:18.796 [2024-11-28 08:24:15.865471] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:25:18.796 [2024-11-28 08:24:15.865476] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:18.796 [2024-11-28 08:24:15.865484] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.865488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.865491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb55690) 00:25:18.796 [2024-11-28 08:24:15.865498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.796 [2024-11-28 08:24:15.865509] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7100, cid 0, qid 0 00:25:18.796 [2024-11-28 08:24:15.865722] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.796 [2024-11-28 08:24:15.865730] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.796 [2024-11-28 08:24:15.865733] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.865737] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7100) on tqpair=0xb55690 00:25:18.796 [2024-11-28 
08:24:15.865742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:18.796 [2024-11-28 08:24:15.865751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.865755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.865759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb55690) 00:25:18.796 [2024-11-28 08:24:15.865766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.796 [2024-11-28 08:24:15.865776] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7100, cid 0, qid 0 00:25:18.796 [2024-11-28 08:24:15.865967] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.796 [2024-11-28 08:24:15.865973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.796 [2024-11-28 08:24:15.865977] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.865980] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7100) on tqpair=0xb55690 00:25:18.796 [2024-11-28 08:24:15.865985] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:18.796 [2024-11-28 08:24:15.865990] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:18.796 [2024-11-28 08:24:15.865998] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:25:18.796 [2024-11-28 08:24:15.866006] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:18.796 [2024-11-28 08:24:15.866015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.866024] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb55690) 00:25:18.796 [2024-11-28 08:24:15.866031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.796 [2024-11-28 08:24:15.866042] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7100, cid 0, qid 0 00:25:18.796 [2024-11-28 08:24:15.866335] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:18.796 [2024-11-28 08:24:15.866343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:18.796 [2024-11-28 08:24:15.866346] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.866350] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb55690): datao=0, datal=4096, cccid=0 00:25:18.796 [2024-11-28 08:24:15.866355] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbb7100) on tqpair(0xb55690): expected_datao=0, payload_size=4096 00:25:18.796 [2024-11-28 08:24:15.866360] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.866373] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.866377] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
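The identify completion just below reports "transport max_xfer_size 4294967295" but "MDTS max_xfer_size 131072": the TCP transport itself is effectively unbounded, so the controller's MDTS field is what caps transfers. MDTS is a zero-based power-of-two multiplier on the minimum memory page size (4096 bytes here, per the identify report earlier), and the effective limit is the smaller of the two. A one-line worked check of that standard NVMe arithmetic (not SPDK code; mdts=5 is the value implied by the logged limit):

    #include <stdio.h>

    int main(void)
    {
        unsigned int mpsmin_bytes = 4096; /* Memory Page Size Minimum from the identify report */
        unsigned int mdts = 5;            /* implied by the 131072-byte limit logged here */

        /* max transfer = 2^MDTS * MPSMIN page size */
        printf("mdts=%u -> %u bytes\n", mdts, mpsmin_bytes << mdts); /* prints 131072 */
        return 0;
    }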
00:25:18.796 [2024-11-28 08:24:15.866546] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.796 [2024-11-28 08:24:15.866552] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.796 [2024-11-28 08:24:15.866556] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.866560] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7100) on tqpair=0xb55690 00:25:18.796 [2024-11-28 08:24:15.866568] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:25:18.796 [2024-11-28 08:24:15.866573] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:25:18.796 [2024-11-28 08:24:15.866578] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:25:18.796 [2024-11-28 08:24:15.866582] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:25:18.796 [2024-11-28 08:24:15.866587] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:25:18.796 [2024-11-28 08:24:15.866591] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:25:18.796 [2024-11-28 08:24:15.866600] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:18.796 [2024-11-28 08:24:15.866606] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.866611] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.866614] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb55690) 00:25:18.796 [2024-11-28 08:24:15.866621] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:18.796 [2024-11-28 08:24:15.866633] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7100, cid 0, qid 0 00:25:18.796 [2024-11-28 08:24:15.866824] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.796 [2024-11-28 08:24:15.866830] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.796 [2024-11-28 08:24:15.866834] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.866838] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7100) on tqpair=0xb55690 00:25:18.796 [2024-11-28 08:24:15.866845] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.866849] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.866852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb55690) 00:25:18.796 [2024-11-28 08:24:15.866861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.796 [2024-11-28 08:24:15.866869] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.866872] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.866876] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=1 on tqpair(0xb55690) 00:25:18.796 [2024-11-28 08:24:15.866882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.796 [2024-11-28 08:24:15.866888] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.866891] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.866895] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb55690) 00:25:18.796 [2024-11-28 08:24:15.866901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.796 [2024-11-28 08:24:15.866907] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.866911] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.866914] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb55690) 00:25:18.796 [2024-11-28 08:24:15.866920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.796 [2024-11-28 08:24:15.866925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:18.796 [2024-11-28 08:24:15.866936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:18.796 [2024-11-28 08:24:15.866942] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.866946] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb55690) 00:25:18.796 [2024-11-28 08:24:15.866953] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.796 [2024-11-28 08:24:15.866965] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7100, cid 0, qid 0 00:25:18.796 [2024-11-28 08:24:15.866970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7280, cid 1, qid 0 00:25:18.796 [2024-11-28 08:24:15.866975] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7400, cid 2, qid 0 00:25:18.796 [2024-11-28 08:24:15.866980] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7580, cid 3, qid 0 00:25:18.796 [2024-11-28 08:24:15.866985] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7700, cid 4, qid 0 00:25:18.796 [2024-11-28 08:24:15.867246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.796 [2024-11-28 08:24:15.867253] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.796 [2024-11-28 08:24:15.867256] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.796 [2024-11-28 08:24:15.867260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7700) on tqpair=0xb55690 00:25:18.796 [2024-11-28 08:24:15.867265] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:25:18.797 [2024-11-28 08:24:15.867270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 
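The SET FEATURES ASYNC EVENT CONFIGURATION command and the four ASYNC EVENT REQUEST (0c) capsules above (cid 0 through 3, one per slot of the Async Event Request Limit) are the host arming its AER slots; those requests sit open on the admin queue and complete only when the target has an event to report. From an application, hooking them is one callback registration plus the usual admin-queue polling (a minimal sketch assuming a ctrlr already connected as in the earlier sketch; function and loop structure are illustrative, not from the test harness):

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* invoked whenever one of the armed ASYNC EVENT REQUEST slots completes */
    static void
    aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        /* cdw0 of the completion carries the async event type/info fields */
        printf("AER completion: cdw0=0x%08x\n", cpl->cdw0);
    }

    void
    watch_events(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

        /* AER completions (and keep-alive traffic, configured just above)
         * are reaped by polling the admin queue; a real application would
         * fold this into its event loop rather than spinning. */
        for (;;) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
    }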
00:25:18.797 [2024-11-28 08:24:15.867281] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:25:18.797 [2024-11-28 08:24:15.867288] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:18.797 [2024-11-28 08:24:15.867294] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.867300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.867304] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb55690) 00:25:18.797 [2024-11-28 08:24:15.867311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:18.797 [2024-11-28 08:24:15.867322] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7700, cid 4, qid 0 00:25:18.797 [2024-11-28 08:24:15.867510] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.797 [2024-11-28 08:24:15.867516] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.797 [2024-11-28 08:24:15.867520] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.867524] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7700) on tqpair=0xb55690 00:25:18.797 [2024-11-28 08:24:15.867591] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:25:18.797 [2024-11-28 08:24:15.867600] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:18.797 [2024-11-28 08:24:15.867608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.867612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb55690) 00:25:18.797 [2024-11-28 08:24:15.867618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.797 [2024-11-28 08:24:15.867629] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7700, cid 4, qid 0 00:25:18.797 [2024-11-28 08:24:15.867817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:18.797 [2024-11-28 08:24:15.867824] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:18.797 [2024-11-28 08:24:15.867827] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.867831] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb55690): datao=0, datal=4096, cccid=4 00:25:18.797 [2024-11-28 08:24:15.867835] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbb7700) on tqpair(0xb55690): expected_datao=0, payload_size=4096 00:25:18.797 [2024-11-28 08:24:15.867840] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.867887] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.867891] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.868066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.797 [2024-11-28 08:24:15.868072] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.797 [2024-11-28 08:24:15.868076] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.868080] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7700) on tqpair=0xb55690 00:25:18.797 [2024-11-28 08:24:15.868091] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:25:18.797 [2024-11-28 08:24:15.868106] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:25:18.797 [2024-11-28 08:24:15.868116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:25:18.797 [2024-11-28 08:24:15.868123] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.868127] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb55690) 00:25:18.797 [2024-11-28 08:24:15.868133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.797 [2024-11-28 08:24:15.868144] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7700, cid 4, qid 0 00:25:18.797 [2024-11-28 08:24:15.872172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:18.797 [2024-11-28 08:24:15.872186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:18.797 [2024-11-28 08:24:15.872190] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.872194] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb55690): datao=0, datal=4096, cccid=4 00:25:18.797 [2024-11-28 08:24:15.872198] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbb7700) on tqpair(0xb55690): expected_datao=0, payload_size=4096 00:25:18.797 [2024-11-28 08:24:15.872203] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.872210] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.872213] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.911172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.797 [2024-11-28 08:24:15.911185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.797 [2024-11-28 08:24:15.911188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.911192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7700) on tqpair=0xb55690 00:25:18.797 [2024-11-28 08:24:15.911206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:18.797 [2024-11-28 08:24:15.911216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:18.797 [2024-11-28 08:24:15.911225] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.911229] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb55690) 00:25:18.797 [2024-11-28 08:24:15.911237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.797 [2024-11-28 08:24:15.911251] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7700, cid 4, qid 0 00:25:18.797 [2024-11-28 08:24:15.911439] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:18.797 [2024-11-28 08:24:15.911446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:18.797 [2024-11-28 08:24:15.911449] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.911453] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb55690): datao=0, datal=4096, cccid=4 00:25:18.797 [2024-11-28 08:24:15.911457] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbb7700) on tqpair(0xb55690): expected_datao=0, payload_size=4096 00:25:18.797 [2024-11-28 08:24:15.911462] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.911469] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.911472] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.911629] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.797 [2024-11-28 08:24:15.911636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.797 [2024-11-28 08:24:15.911639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.911643] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7700) on tqpair=0xb55690 00:25:18.797 [2024-11-28 08:24:15.911656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:18.797 [2024-11-28 08:24:15.911665] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:25:18.797 [2024-11-28 08:24:15.911674] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:25:18.797 [2024-11-28 08:24:15.911680] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:25:18.797 [2024-11-28 08:24:15.911685] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:18.797 [2024-11-28 08:24:15.911694] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:25:18.797 [2024-11-28 08:24:15.911700] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:25:18.797 [2024-11-28 08:24:15.911704] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:25:18.797 [2024-11-28 08:24:15.911710] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:25:18.797 [2024-11-28 08:24:15.911727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.911732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb55690) 00:25:18.797 
[2024-11-28 08:24:15.911739] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.797 [2024-11-28 08:24:15.911746] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.911750] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.911753] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb55690) 00:25:18.797 [2024-11-28 08:24:15.911760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:18.797 [2024-11-28 08:24:15.911774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7700, cid 4, qid 0 00:25:18.797 [2024-11-28 08:24:15.911780] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7880, cid 5, qid 0 00:25:18.797 [2024-11-28 08:24:15.912010] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.797 [2024-11-28 08:24:15.912017] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.797 [2024-11-28 08:24:15.912021] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.912025] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7700) on tqpair=0xb55690 00:25:18.797 [2024-11-28 08:24:15.912031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.797 [2024-11-28 08:24:15.912037] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.797 [2024-11-28 08:24:15.912041] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.912045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7880) on tqpair=0xb55690 00:25:18.797 [2024-11-28 08:24:15.912054] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.797 [2024-11-28 08:24:15.912058] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb55690) 00:25:18.797 [2024-11-28 08:24:15.912064] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.797 [2024-11-28 08:24:15.912074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7880, cid 5, qid 0 00:25:18.797 [2024-11-28 08:24:15.912275] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.798 [2024-11-28 08:24:15.912282] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.798 [2024-11-28 08:24:15.912286] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.912290] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7880) on tqpair=0xb55690 00:25:18.798 [2024-11-28 08:24:15.912299] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.912303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb55690) 00:25:18.798 [2024-11-28 08:24:15.912310] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.798 [2024-11-28 08:24:15.912320] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7880, cid 5, qid 0 00:25:18.798 [2024-11-28 08:24:15.912525] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:25:18.798 [2024-11-28 08:24:15.912532] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.798 [2024-11-28 08:24:15.912535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.912539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7880) on tqpair=0xb55690 00:25:18.798 [2024-11-28 08:24:15.912548] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.912552] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb55690) 00:25:18.798 [2024-11-28 08:24:15.912559] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.798 [2024-11-28 08:24:15.912568] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7880, cid 5, qid 0 00:25:18.798 [2024-11-28 08:24:15.912788] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.798 [2024-11-28 08:24:15.912796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.798 [2024-11-28 08:24:15.912800] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.912804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7880) on tqpair=0xb55690 00:25:18.798 [2024-11-28 08:24:15.912821] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.912826] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb55690) 00:25:18.798 [2024-11-28 08:24:15.912832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.798 [2024-11-28 08:24:15.912840] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.912844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb55690) 00:25:18.798 [2024-11-28 08:24:15.912850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.798 [2024-11-28 08:24:15.912858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.912862] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xb55690) 00:25:18.798 [2024-11-28 08:24:15.912868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.798 [2024-11-28 08:24:15.912876] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.912880] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb55690) 00:25:18.798 [2024-11-28 08:24:15.912886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.798 [2024-11-28 08:24:15.912897] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7880, cid 5, qid 0 00:25:18.798 [2024-11-28 08:24:15.912902] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7700, cid 4, qid 0 00:25:18.798 [2024-11-28 08:24:15.912907] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7a00, cid 6, qid 0 00:25:18.798 [2024-11-28 08:24:15.912912] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7b80, cid 7, qid 0 00:25:18.798 [2024-11-28 08:24:15.913237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:18.798 [2024-11-28 08:24:15.913245] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:18.798 [2024-11-28 08:24:15.913248] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.913252] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb55690): datao=0, datal=8192, cccid=5 00:25:18.798 [2024-11-28 08:24:15.913256] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbb7880) on tqpair(0xb55690): expected_datao=0, payload_size=8192 00:25:18.798 [2024-11-28 08:24:15.913266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.913348] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.913353] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.913359] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:18.798 [2024-11-28 08:24:15.913366] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:18.798 [2024-11-28 08:24:15.913370] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.913374] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb55690): datao=0, datal=512, cccid=4 00:25:18.798 [2024-11-28 08:24:15.913379] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbb7700) on tqpair(0xb55690): expected_datao=0, payload_size=512 00:25:18.798 [2024-11-28 08:24:15.913383] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.913389] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.913393] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.913399] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:18.798 [2024-11-28 08:24:15.913405] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:18.798 [2024-11-28 08:24:15.913411] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.913415] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb55690): datao=0, datal=512, cccid=6 00:25:18.798 [2024-11-28 08:24:15.913419] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbb7a00) on tqpair(0xb55690): expected_datao=0, payload_size=512 00:25:18.798 [2024-11-28 08:24:15.913424] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.913430] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.913433] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.913439] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:18.798 [2024-11-28 08:24:15.913445] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:18.798 [2024-11-28 08:24:15.913451] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.913457] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0xb55690): datao=0, datal=4096, cccid=7 00:25:18.798 [2024-11-28 08:24:15.913462] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbb7b80) on tqpair(0xb55690): expected_datao=0, payload_size=4096 00:25:18.798 [2024-11-28 08:24:15.913466] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.913473] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.913477] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.913492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.798 [2024-11-28 08:24:15.913498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.798 [2024-11-28 08:24:15.913502] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.913506] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7880) on tqpair=0xb55690 00:25:18.798 [2024-11-28 08:24:15.913518] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.798 [2024-11-28 08:24:15.913525] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.798 [2024-11-28 08:24:15.913529] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.913533] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7700) on tqpair=0xb55690 00:25:18.798 [2024-11-28 08:24:15.913544] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.798 [2024-11-28 08:24:15.913550] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.798 [2024-11-28 08:24:15.913555] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.913559] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7a00) on tqpair=0xb55690 00:25:18.798 [2024-11-28 08:24:15.913569] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.798 [2024-11-28 08:24:15.913575] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.798 [2024-11-28 08:24:15.913581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.798 [2024-11-28 08:24:15.913586] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7b80) on tqpair=0xb55690 00:25:18.798 ===================================================== 00:25:18.798 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:18.798 ===================================================== 00:25:18.798 Controller Capabilities/Features 00:25:18.798 ================================ 00:25:18.798 Vendor ID: 8086 00:25:18.798 Subsystem Vendor ID: 8086 00:25:18.798 Serial Number: SPDK00000000000001 00:25:18.798 Model Number: SPDK bdev Controller 00:25:18.798 Firmware Version: 25.01 00:25:18.798 Recommended Arb Burst: 6 00:25:18.798 IEEE OUI Identifier: e4 d2 5c 00:25:18.798 Multi-path I/O 00:25:18.798 May have multiple subsystem ports: Yes 00:25:18.798 May have multiple controllers: Yes 00:25:18.798 Associated with SR-IOV VF: No 00:25:18.798 Max Data Transfer Size: 131072 00:25:18.798 Max Number of Namespaces: 32 00:25:18.798 Max Number of I/O Queues: 127 00:25:18.798 NVMe Specification Version (VS): 1.3 00:25:18.798 NVMe Specification Version (Identify): 1.3 00:25:18.798 Maximum Queue Entries: 128 00:25:18.798 Contiguous Queues Required: Yes 00:25:18.798 Arbitration Mechanisms Supported 00:25:18.798 Weighted Round Robin: Not Supported 
00:25:18.798 Vendor Specific: Not Supported 00:25:18.798 Reset Timeout: 15000 ms 00:25:18.798 Doorbell Stride: 4 bytes 00:25:18.798 NVM Subsystem Reset: Not Supported 00:25:18.798 Command Sets Supported 00:25:18.798 NVM Command Set: Supported 00:25:18.798 Boot Partition: Not Supported 00:25:18.798 Memory Page Size Minimum: 4096 bytes 00:25:18.798 Memory Page Size Maximum: 4096 bytes 00:25:18.798 Persistent Memory Region: Not Supported 00:25:18.798 Optional Asynchronous Events Supported 00:25:18.798 Namespace Attribute Notices: Supported 00:25:18.798 Firmware Activation Notices: Not Supported 00:25:18.798 ANA Change Notices: Not Supported 00:25:18.798 PLE Aggregate Log Change Notices: Not Supported 00:25:18.799 LBA Status Info Alert Notices: Not Supported 00:25:18.799 EGE Aggregate Log Change Notices: Not Supported 00:25:18.799 Normal NVM Subsystem Shutdown event: Not Supported 00:25:18.799 Zone Descriptor Change Notices: Not Supported 00:25:18.799 Discovery Log Change Notices: Not Supported 00:25:18.799 Controller Attributes 00:25:18.799 128-bit Host Identifier: Supported 00:25:18.799 Non-Operational Permissive Mode: Not Supported 00:25:18.799 NVM Sets: Not Supported 00:25:18.799 Read Recovery Levels: Not Supported 00:25:18.799 Endurance Groups: Not Supported 00:25:18.799 Predictable Latency Mode: Not Supported 00:25:18.799 Traffic Based Keep ALive: Not Supported 00:25:18.799 Namespace Granularity: Not Supported 00:25:18.799 SQ Associations: Not Supported 00:25:18.799 UUID List: Not Supported 00:25:18.799 Multi-Domain Subsystem: Not Supported 00:25:18.799 Fixed Capacity Management: Not Supported 00:25:18.799 Variable Capacity Management: Not Supported 00:25:18.799 Delete Endurance Group: Not Supported 00:25:18.799 Delete NVM Set: Not Supported 00:25:18.799 Extended LBA Formats Supported: Not Supported 00:25:18.799 Flexible Data Placement Supported: Not Supported 00:25:18.799 00:25:18.799 Controller Memory Buffer Support 00:25:18.799 ================================ 00:25:18.799 Supported: No 00:25:18.799 00:25:18.799 Persistent Memory Region Support 00:25:18.799 ================================ 00:25:18.799 Supported: No 00:25:18.799 00:25:18.799 Admin Command Set Attributes 00:25:18.799 ============================ 00:25:18.799 Security Send/Receive: Not Supported 00:25:18.799 Format NVM: Not Supported 00:25:18.799 Firmware Activate/Download: Not Supported 00:25:18.799 Namespace Management: Not Supported 00:25:18.799 Device Self-Test: Not Supported 00:25:18.799 Directives: Not Supported 00:25:18.799 NVMe-MI: Not Supported 00:25:18.799 Virtualization Management: Not Supported 00:25:18.799 Doorbell Buffer Config: Not Supported 00:25:18.799 Get LBA Status Capability: Not Supported 00:25:18.799 Command & Feature Lockdown Capability: Not Supported 00:25:18.799 Abort Command Limit: 4 00:25:18.799 Async Event Request Limit: 4 00:25:18.799 Number of Firmware Slots: N/A 00:25:18.799 Firmware Slot 1 Read-Only: N/A 00:25:18.799 Firmware Activation Without Reset: N/A 00:25:18.799 Multiple Update Detection Support: N/A 00:25:18.799 Firmware Update Granularity: No Information Provided 00:25:18.799 Per-Namespace SMART Log: No 00:25:18.799 Asymmetric Namespace Access Log Page: Not Supported 00:25:18.799 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:18.799 Command Effects Log Page: Supported 00:25:18.799 Get Log Page Extended Data: Supported 00:25:18.799 Telemetry Log Pages: Not Supported 00:25:18.799 Persistent Event Log Pages: Not Supported 00:25:18.799 Supported Log Pages Log Page: May Support 
00:25:18.799 Commands Supported & Effects Log Page: Not Supported 00:25:18.799 Feature Identifiers & Effects Log Page:May Support 00:25:18.799 NVMe-MI Commands & Effects Log Page: May Support 00:25:18.799 Data Area 4 for Telemetry Log: Not Supported 00:25:18.799 Error Log Page Entries Supported: 128 00:25:18.799 Keep Alive: Supported 00:25:18.799 Keep Alive Granularity: 10000 ms 00:25:18.799 00:25:18.799 NVM Command Set Attributes 00:25:18.799 ========================== 00:25:18.799 Submission Queue Entry Size 00:25:18.799 Max: 64 00:25:18.799 Min: 64 00:25:18.799 Completion Queue Entry Size 00:25:18.799 Max: 16 00:25:18.799 Min: 16 00:25:18.799 Number of Namespaces: 32 00:25:18.799 Compare Command: Supported 00:25:18.799 Write Uncorrectable Command: Not Supported 00:25:18.799 Dataset Management Command: Supported 00:25:18.799 Write Zeroes Command: Supported 00:25:18.799 Set Features Save Field: Not Supported 00:25:18.799 Reservations: Supported 00:25:18.799 Timestamp: Not Supported 00:25:18.799 Copy: Supported 00:25:18.799 Volatile Write Cache: Present 00:25:18.799 Atomic Write Unit (Normal): 1 00:25:18.799 Atomic Write Unit (PFail): 1 00:25:18.799 Atomic Compare & Write Unit: 1 00:25:18.799 Fused Compare & Write: Supported 00:25:18.799 Scatter-Gather List 00:25:18.799 SGL Command Set: Supported 00:25:18.799 SGL Keyed: Supported 00:25:18.799 SGL Bit Bucket Descriptor: Not Supported 00:25:18.799 SGL Metadata Pointer: Not Supported 00:25:18.799 Oversized SGL: Not Supported 00:25:18.799 SGL Metadata Address: Not Supported 00:25:18.799 SGL Offset: Supported 00:25:18.799 Transport SGL Data Block: Not Supported 00:25:18.799 Replay Protected Memory Block: Not Supported 00:25:18.799 00:25:18.799 Firmware Slot Information 00:25:18.799 ========================= 00:25:18.799 Active slot: 1 00:25:18.799 Slot 1 Firmware Revision: 25.01 00:25:18.799 00:25:18.799 00:25:18.799 Commands Supported and Effects 00:25:18.799 ============================== 00:25:18.799 Admin Commands 00:25:18.799 -------------- 00:25:18.799 Get Log Page (02h): Supported 00:25:18.799 Identify (06h): Supported 00:25:18.799 Abort (08h): Supported 00:25:18.799 Set Features (09h): Supported 00:25:18.799 Get Features (0Ah): Supported 00:25:18.799 Asynchronous Event Request (0Ch): Supported 00:25:18.799 Keep Alive (18h): Supported 00:25:18.799 I/O Commands 00:25:18.799 ------------ 00:25:18.799 Flush (00h): Supported LBA-Change 00:25:18.799 Write (01h): Supported LBA-Change 00:25:18.799 Read (02h): Supported 00:25:18.799 Compare (05h): Supported 00:25:18.799 Write Zeroes (08h): Supported LBA-Change 00:25:18.799 Dataset Management (09h): Supported LBA-Change 00:25:18.799 Copy (19h): Supported LBA-Change 00:25:18.799 00:25:18.799 Error Log 00:25:18.799 ========= 00:25:18.799 00:25:18.799 Arbitration 00:25:18.799 =========== 00:25:18.799 Arbitration Burst: 1 00:25:18.799 00:25:18.799 Power Management 00:25:18.799 ================ 00:25:18.799 Number of Power States: 1 00:25:18.799 Current Power State: Power State #0 00:25:18.799 Power State #0: 00:25:18.799 Max Power: 0.00 W 00:25:18.799 Non-Operational State: Operational 00:25:18.799 Entry Latency: Not Reported 00:25:18.799 Exit Latency: Not Reported 00:25:18.799 Relative Read Throughput: 0 00:25:18.799 Relative Read Latency: 0 00:25:18.799 Relative Write Throughput: 0 00:25:18.799 Relative Write Latency: 0 00:25:18.799 Idle Power: Not Reported 00:25:18.799 Active Power: Not Reported 00:25:18.799 Non-Operational Permissive Mode: Not Supported 00:25:18.799 00:25:18.799 Health 
Information 00:25:18.799 ================== 00:25:18.799 Critical Warnings: 00:25:18.799 Available Spare Space: OK 00:25:18.799 Temperature: OK 00:25:18.799 Device Reliability: OK 00:25:18.799 Read Only: No 00:25:18.799 Volatile Memory Backup: OK 00:25:18.799 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:18.799 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:25:18.799 Available Spare: 0% 00:25:18.799 Available Spare Threshold: 0% 00:25:18.799 Life Percentage Used:[2024-11-28 08:24:15.913689] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.799 [2024-11-28 08:24:15.913694] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb55690) 00:25:18.799 [2024-11-28 08:24:15.913701] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.799 [2024-11-28 08:24:15.913713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7b80, cid 7, qid 0 00:25:18.799 [2024-11-28 08:24:15.913932] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.799 [2024-11-28 08:24:15.913939] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.799 [2024-11-28 08:24:15.913942] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.799 [2024-11-28 08:24:15.913946] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7b80) on tqpair=0xb55690 00:25:18.799 [2024-11-28 08:24:15.913981] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:25:18.799 [2024-11-28 08:24:15.913991] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7100) on tqpair=0xb55690 00:25:18.799 [2024-11-28 08:24:15.913998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.799 [2024-11-28 08:24:15.914003] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7280) on tqpair=0xb55690 00:25:18.799 [2024-11-28 08:24:15.914008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.799 [2024-11-28 08:24:15.914013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7400) on tqpair=0xb55690 00:25:18.799 [2024-11-28 08:24:15.914018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.799 [2024-11-28 08:24:15.914023] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7580) on tqpair=0xb55690 00:25:18.799 [2024-11-28 08:24:15.914027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:18.800 [2024-11-28 08:24:15.914036] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.800 [2024-11-28 08:24:15.914040] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.800 [2024-11-28 08:24:15.914043] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb55690) 00:25:18.800 [2024-11-28 08:24:15.914051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.800 [2024-11-28 08:24:15.914062] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7580, cid 3, qid 0 00:25:18.800 [2024-11-28 
08:24:15.914241] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.800 [2024-11-28 08:24:15.914250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.800 [2024-11-28 08:24:15.914254] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.800 [2024-11-28 08:24:15.914258] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7580) on tqpair=0xb55690 00:25:18.800 [2024-11-28 08:24:15.914265] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.800 [2024-11-28 08:24:15.914269] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.800 [2024-11-28 08:24:15.914273] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb55690) 00:25:18.800 [2024-11-28 08:24:15.914279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.800 [2024-11-28 08:24:15.914293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7580, cid 3, qid 0 00:25:18.800 [2024-11-28 08:24:15.914529] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.800 [2024-11-28 08:24:15.914536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.800 [2024-11-28 08:24:15.914540] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.800 [2024-11-28 08:24:15.914544] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7580) on tqpair=0xb55690 00:25:18.800 [2024-11-28 08:24:15.914548] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:25:18.800 [2024-11-28 08:24:15.914553] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:25:18.800 [2024-11-28 08:24:15.914563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.800 [2024-11-28 08:24:15.914567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.800 [2024-11-28 08:24:15.914570] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb55690) 00:25:18.800 [2024-11-28 08:24:15.914577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.800 [2024-11-28 08:24:15.914588] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7580, cid 3, qid 0 00:25:18.800 [2024-11-28 08:24:15.914818] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.800 [2024-11-28 08:24:15.914825] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.800 [2024-11-28 08:24:15.914828] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.800 [2024-11-28 08:24:15.914832] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7580) on tqpair=0xb55690 00:25:18.800 [2024-11-28 08:24:15.914843] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.800 [2024-11-28 08:24:15.914847] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.800 [2024-11-28 08:24:15.914850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb55690) 00:25:18.800 [2024-11-28 08:24:15.914857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.800 [2024-11-28 08:24:15.914867] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7580, cid 3, qid 0 00:25:18.800 [2024-11-28 08:24:15.915097] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.800 [2024-11-28 08:24:15.915103] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.800 [2024-11-28 08:24:15.915107] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.800 [2024-11-28 08:24:15.915111] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7580) on tqpair=0xb55690 00:25:18.800 [2024-11-28 08:24:15.915121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:18.800 [2024-11-28 08:24:15.915125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:18.800 [2024-11-28 08:24:15.915129] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb55690) 00:25:18.800 [2024-11-28 08:24:15.915135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.800 [2024-11-28 08:24:15.915145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7580, cid 3, qid 0 00:25:18.800 [2024-11-28 08:24:15.919172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:18.800 [2024-11-28 08:24:15.919183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:18.800 [2024-11-28 08:24:15.919186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:18.800 [2024-11-28 08:24:15.919190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7580) on tqpair=0xb55690 00:25:18.800 [2024-11-28 08:24:15.919199] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:25:18.800 0% 00:25:18.800 Data Units Read: 0 00:25:18.800 Data Units Written: 0 00:25:18.800 Host Read Commands: 0 00:25:18.800 Host Write Commands: 0 00:25:18.800 Controller Busy Time: 0 minutes 00:25:18.800 Power Cycles: 0 00:25:18.800 Power On Hours: 0 hours 00:25:18.800 Unsafe Shutdowns: 0 00:25:18.800 Unrecoverable Media Errors: 0 00:25:18.800 Lifetime Error Log Entries: 0 00:25:18.800 Warning Temperature Time: 0 minutes 00:25:18.800 Critical Temperature Time: 0 minutes 00:25:18.800 00:25:18.800 Number of Queues 00:25:18.800 ================ 00:25:18.800 Number of I/O Submission Queues: 127 00:25:18.800 Number of I/O Completion Queues: 127 00:25:18.800 00:25:18.800 Active Namespaces 00:25:18.800 ================= 00:25:18.800 Namespace ID:1 00:25:18.800 Error Recovery Timeout: Unlimited 00:25:18.800 Command Set Identifier: NVM (00h) 00:25:18.800 Deallocate: Supported 00:25:18.800 Deallocated/Unwritten Error: Not Supported 00:25:18.800 Deallocated Read Value: Unknown 00:25:18.800 Deallocate in Write Zeroes: Not Supported 00:25:18.800 Deallocated Guard Field: 0xFFFF 00:25:18.800 Flush: Supported 00:25:18.800 Reservation: Supported 00:25:18.800 Namespace Sharing Capabilities: Multiple Controllers 00:25:18.800 Size (in LBAs): 131072 (0GiB) 00:25:18.800 Capacity (in LBAs): 131072 (0GiB) 00:25:18.800 Utilization (in LBAs): 131072 (0GiB) 00:25:18.800 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:18.800 EUI64: ABCDEF0123456789 00:25:18.800 UUID: 4cc6da8c-d5be-4e4a-8aef-18cc189a10d1 00:25:18.800 Thin Provisioning: Not Supported 00:25:18.800 Per-NS Atomic Units: Yes 00:25:18.800 Atomic Boundary Size (Normal): 0 00:25:18.800 Atomic Boundary Size (PFail): 0 00:25:18.800 Atomic Boundary Offset: 0 00:25:18.800 
Maximum Single Source Range Length: 65535 00:25:18.800 Maximum Copy Length: 65535 00:25:18.800 Maximum Source Range Count: 1 00:25:18.800 NGUID/EUI64 Never Reused: No 00:25:18.800 Namespace Write Protected: No 00:25:18.800 Number of LBA Formats: 1 00:25:18.800 Current LBA Format: LBA Format #00 00:25:18.800 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:18.800 00:25:18.800 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:18.800 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:18.800 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.800 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:18.800 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.800 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:18.800 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:18.800 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:18.800 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:25:18.800 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:18.800 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:25:18.800 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:18.800 08:24:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:18.800 rmmod nvme_tcp 00:25:18.800 rmmod nvme_fabrics 00:25:18.800 rmmod nvme_keyring 00:25:18.800 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:18.800 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:25:18.800 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:25:18.800 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2065920 ']' 00:25:18.801 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2065920 00:25:18.801 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2065920 ']' 00:25:18.801 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2065920 00:25:18.801 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:25:18.801 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:18.801 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2065920 00:25:19.063 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:19.063 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:19.063 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2065920' 00:25:19.063 killing process with pid 2065920 00:25:19.063 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2065920 00:25:19.063 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2065920 00:25:19.063 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' 
== iso ']' 00:25:19.063 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:19.063 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:19.063 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:25:19.063 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:25:19.063 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:19.063 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:25:19.063 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:19.063 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:19.063 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.063 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.063 08:24:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.611 08:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:21.611 00:25:21.611 real 0m11.710s 00:25:21.611 user 0m8.482s 00:25:21.611 sys 0m6.296s 00:25:21.611 08:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:21.611 08:24:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:21.611 ************************************ 00:25:21.611 END TEST nvmf_identify 00:25:21.611 ************************************ 00:25:21.611 08:24:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.612 ************************************ 00:25:21.612 START TEST nvmf_perf 00:25:21.612 ************************************ 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:21.612 * Looking for test storage... 
00:25:21.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:21.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.612 --rc genhtml_branch_coverage=1 00:25:21.612 --rc genhtml_function_coverage=1 00:25:21.612 --rc genhtml_legend=1 00:25:21.612 --rc geninfo_all_blocks=1 00:25:21.612 --rc geninfo_unexecuted_blocks=1 00:25:21.612 00:25:21.612 ' 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:21.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.612 --rc genhtml_branch_coverage=1 00:25:21.612 --rc genhtml_function_coverage=1 00:25:21.612 --rc genhtml_legend=1 00:25:21.612 --rc geninfo_all_blocks=1 00:25:21.612 --rc geninfo_unexecuted_blocks=1 00:25:21.612 00:25:21.612 ' 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:21.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.612 --rc genhtml_branch_coverage=1 00:25:21.612 --rc genhtml_function_coverage=1 00:25:21.612 --rc genhtml_legend=1 00:25:21.612 --rc geninfo_all_blocks=1 00:25:21.612 --rc geninfo_unexecuted_blocks=1 00:25:21.612 00:25:21.612 ' 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:21.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:21.612 --rc genhtml_branch_coverage=1 00:25:21.612 --rc genhtml_function_coverage=1 00:25:21.612 --rc genhtml_legend=1 00:25:21.612 --rc geninfo_all_blocks=1 00:25:21.612 --rc geninfo_unexecuted_blocks=1 00:25:21.612 00:25:21.612 ' 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:21.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:21.612 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:21.613 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:21.613 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:21.613 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:21.613 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.613 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:21.613 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:21.613 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:21.613 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.613 08:24:18 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:21.613 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.613 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:21.613 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:21.613 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:21.613 08:24:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:29.839 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:29.839 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:29.839 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:29.839 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:29.839 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:29.839 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:29.839 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:29.839 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:25:29.839 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:29.840 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:29.840 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:29.840 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:29.840 08:24:25 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:29.840 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:29.840 08:24:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:29.840 08:24:26 
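Note: the netns plumbing just traced builds the two-node TCP topology used for the rest of the test — one E810 port is moved into a namespace and becomes the target side, while its sibling port stays in the root namespace as the initiator. Condensed from the commands above; interface names and addresses are the log's own:

    ip netns add cvl_0_0_ns_spdk                 # target runs inside this namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # port 0 -> target side
    ip addr add 10.0.0.1/24 dev cvl_0_1          # port 1 stays as the initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
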
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:29.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:29.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:25:29.840 00:25:29.840 --- 10.0.0.2 ping statistics --- 00:25:29.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.840 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:29.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:29.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:25:29.840 00:25:29.840 --- 10.0.0.1 ping statistics --- 00:25:29.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.840 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2070446 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2070446 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2070446 ']' 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:25:29.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:29.840 08:24:26 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:29.840 [2024-11-28 08:24:26.323321] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:25:29.840 [2024-11-28 08:24:26.323388] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.840 [2024-11-28 08:24:26.423882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:29.840 [2024-11-28 08:24:26.477470] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:29.840 [2024-11-28 08:24:26.477522] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:29.840 [2024-11-28 08:24:26.477532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:29.840 [2024-11-28 08:24:26.477539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:29.840 [2024-11-28 08:24:26.477545] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:29.840 [2024-11-28 08:24:26.479587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.840 [2024-11-28 08:24:26.479748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:29.840 [2024-11-28 08:24:26.479894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.840 [2024-11-28 08:24:26.479895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:30.101 08:24:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:30.101 08:24:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:25:30.101 08:24:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:30.101 08:24:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:30.101 08:24:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:30.101 08:24:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:30.101 08:24:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:30.101 08:24:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:30.671 08:24:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:30.671 08:24:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:30.671 08:24:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:25:30.671 08:24:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:30.931 08:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
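Note: the perf.sh steps above assemble the bdev list the target will export — gen_nvme.sh plus load_subsystem_config attach the local NVMe controller, framework_get_config with the jq filter recovers its PCI address, and a malloc bdev is created alongside it. A sketch of those two extractions, assuming it runs from the spdk checkout root:

    # recover the attached controller's address from the loaded config
    local_nvme_trid=$(scripts/rpc.py framework_get_config bdev \
        | jq -r '.[].params | select(.name=="Nvme0").traddr')   # -> 0000:65:00.0
    # add a 64 MB RAM-backed bdev with 512-byte blocks; prints "Malloc0"
    bdevs=$(scripts/rpc.py bdev_malloc_create 64 512)
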
00:25:30.931 08:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:25:30.931 08:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:30.931 08:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:30.931 08:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:31.192 [2024-11-28 08:24:28.305672] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.192 08:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:31.451 08:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:31.451 08:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:31.451 08:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:31.451 08:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:31.712 08:24:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:31.973 [2024-11-28 08:24:29.013573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.973 08:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:31.973 08:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:25:31.973 08:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:31.973 08:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:31.973 08:24:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:33.356 Initializing NVMe Controllers 00:25:33.356 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:25:33.356 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:25:33.356 Initialization complete. Launching workers. 
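Note: the same spdk_nvme_perf invocation shape recurs throughout this test with varying queue depths and I/O sizes; the run above targets the local PCIe controller as a baseline, and the later runs swap in the fabrics transport ID. Flag glosses below follow the tool's help text; all values are copied from this run:

    # -i 0: shared-memory group ID; -q 32: 32 outstanding I/Os per queue;
    # -o 4096: 4 KiB I/O size; -w randrw -M 50: random mix, 50% reads;
    # -t 1: one-second pass; -r: which controller to attach to
    spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:PCIe traddr:0000:65:00.0'
    # the NVMe-oF runs substitute the fabrics transport ID, e.g.
    #   -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
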
00:25:33.356 ======================================================== 00:25:33.356 Latency(us) 00:25:33.357 Device Information : IOPS MiB/s Average min max 00:25:33.357 PCIE (0000:65:00.0) NSID 1 from core 0: 77436.41 302.49 412.59 13.34 4968.62 00:25:33.357 ======================================================== 00:25:33.357 Total : 77436.41 302.49 412.59 13.34 4968.62 00:25:33.357 00:25:33.357 08:24:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:34.741 Initializing NVMe Controllers 00:25:34.741 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:34.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:34.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:34.741 Initialization complete. Launching workers. 00:25:34.741 ======================================================== 00:25:34.741 Latency(us) 00:25:34.741 Device Information : IOPS MiB/s Average min max 00:25:34.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 87.00 0.34 11914.08 216.78 45934.93 00:25:34.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 76.00 0.30 13218.30 7963.54 47889.61 00:25:34.741 ======================================================== 00:25:34.741 Total : 163.00 0.64 12522.19 216.78 47889.61 00:25:34.741 00:25:34.741 08:24:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:36.126 Initializing NVMe Controllers 00:25:36.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:36.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:36.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:36.126 Initialization complete. Launching workers. 00:25:36.126 ======================================================== 00:25:36.126 Latency(us) 00:25:36.126 Device Information : IOPS MiB/s Average min max 00:25:36.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11637.56 45.46 2749.90 409.30 6325.91 00:25:36.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3732.61 14.58 8573.77 7242.10 18498.11 00:25:36.126 ======================================================== 00:25:36.126 Total : 15370.18 60.04 4164.22 409.30 18498.11 00:25:36.126 00:25:36.126 08:24:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:36.126 08:24:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:36.126 08:24:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:38.666 Initializing NVMe Controllers 00:25:38.666 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:38.666 Controller IO queue size 128, less than required. 00:25:38.666 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
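Note: the "Controller IO queue size 128, less than required" notices in this run mean perf is asking for more outstanding I/O than the connected queue admits, so the excess waits inside the NVMe driver; the run still completes but the effective depth is capped at 128. A deeper target queue would avoid the queuing — an illustrative tuning, not something this run does; -q here is rpc.py's --max-queue-depth:

    scripts/rpc.py nvmf_create_transport -t tcp -o -q 256
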
00:25:38.666 Controller IO queue size 128, less than required. 00:25:38.666 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:38.666 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:38.666 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:38.666 Initialization complete. Launching workers. 00:25:38.666 ======================================================== 00:25:38.666 Latency(us) 00:25:38.666 Device Information : IOPS MiB/s Average min max 00:25:38.666 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1929.59 482.40 67330.56 39228.14 115199.37 00:25:38.667 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 600.37 150.09 225188.44 47308.68 345995.76 00:25:38.667 ======================================================== 00:25:38.667 Total : 2529.96 632.49 104791.01 39228.14 345995.76 00:25:38.667 00:25:38.667 08:24:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:38.927 No valid NVMe controllers or AIO or URING devices found 00:25:38.927 Initializing NVMe Controllers 00:25:38.927 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:38.927 Controller IO queue size 128, less than required. 00:25:38.927 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:38.927 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:38.927 Controller IO queue size 128, less than required. 00:25:38.927 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:38.927 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:38.927 WARNING: Some requested NVMe devices were skipped 00:25:38.927 08:24:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:41.472 Initializing NVMe Controllers 00:25:41.472 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:41.472 Controller IO queue size 128, less than required. 00:25:41.472 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:41.472 Controller IO queue size 128, less than required. 00:25:41.472 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:41.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:41.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:41.472 Initialization complete. Launching workers. 
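Note: in the --transport-stat printout below, a poll that finds no socket work is counted under idle_polls, so the share of useful polls is (polls - idle_polls) / polls — an interpretation of the counters, using NSID 1's numbers from the block that follows:

    echo "scale=2; (46810 - 28364) / 46810" | bc   # ≈ .39 of polls found work
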
00:25:41.472 00:25:41.473 ==================== 00:25:41.473 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:41.473 TCP transport: 00:25:41.473 polls: 46810 00:25:41.473 idle_polls: 28364 00:25:41.473 sock_completions: 18446 00:25:41.473 nvme_completions: 7143 00:25:41.473 submitted_requests: 10680 00:25:41.473 queued_requests: 1 00:25:41.473 00:25:41.473 ==================== 00:25:41.473 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:41.473 TCP transport: 00:25:41.473 polls: 48537 00:25:41.473 idle_polls: 30784 00:25:41.473 sock_completions: 17753 00:25:41.473 nvme_completions: 6795 00:25:41.473 submitted_requests: 10168 00:25:41.473 queued_requests: 1 00:25:41.473 ======================================================== 00:25:41.473 Latency(us) 00:25:41.473 Device Information : IOPS MiB/s Average min max 00:25:41.473 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1785.44 446.36 72913.11 32448.16 126494.93 00:25:41.473 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1698.44 424.61 77199.30 32272.11 124748.47 00:25:41.473 ======================================================== 00:25:41.473 Total : 3483.88 870.97 75002.69 32272.11 126494.93 00:25:41.473 00:25:41.473 08:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:41.473 08:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:41.733 08:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:41.733 08:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:41.733 08:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:41.733 08:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:41.733 08:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:25:41.733 08:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:41.733 08:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:25:41.733 08:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:41.734 08:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:41.734 rmmod nvme_tcp 00:25:41.734 rmmod nvme_fabrics 00:25:41.734 rmmod nvme_keyring 00:25:41.734 08:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:41.734 08:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:25:41.734 08:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:25:41.734 08:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2070446 ']' 00:25:41.734 08:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2070446 00:25:41.734 08:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2070446 ']' 00:25:41.734 08:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2070446 00:25:41.734 08:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:25:41.734 08:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:41.734 08:24:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2070446 00:25:41.994 08:24:39 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:41.994 08:24:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:41.994 08:24:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2070446' 00:25:41.994 killing process with pid 2070446 00:25:41.994 08:24:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 2070446 00:25:41.994 08:24:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2070446 00:25:43.911 08:24:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:43.911 08:24:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:43.911 08:24:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:43.911 08:24:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:25:43.911 08:24:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:25:43.911 08:24:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:25:43.911 08:24:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:43.911 08:24:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:43.911 08:24:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:43.911 08:24:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.911 08:24:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:43.911 08:24:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.831 08:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:45.831 00:25:45.831 real 0m24.645s 00:25:45.831 user 0m59.550s 00:25:45.831 sys 0m8.715s 00:25:45.831 08:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:45.831 08:24:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:45.831 ************************************ 00:25:45.831 END TEST nvmf_perf 00:25:45.831 ************************************ 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.092 ************************************ 00:25:46.092 START TEST nvmf_fio_host 00:25:46.092 ************************************ 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:46.092 * Looking for test storage... 
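Note: the firewall teardown in the nvmf_perf epilogue above leans on the tag added at setup time — every rule the ipts wrapper inserts carries an "SPDK_NVMF" comment (visible where the port-4420 ACCEPT rule went in), so cleanup can restore the table minus exactly those rules, as the log's own pipeline shows:

    iptables-save | grep -v SPDK_NVMF | iptables-restore
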
00:25:46.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:25:46.092 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:46.355 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:46.355 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:46.355 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:25:46.355 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:46.355 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:46.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.355 --rc genhtml_branch_coverage=1 00:25:46.355 --rc genhtml_function_coverage=1 00:25:46.355 --rc genhtml_legend=1 00:25:46.355 --rc geninfo_all_blocks=1 00:25:46.355 --rc geninfo_unexecuted_blocks=1 00:25:46.355 00:25:46.355 ' 00:25:46.355 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:46.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.355 --rc genhtml_branch_coverage=1 00:25:46.355 --rc genhtml_function_coverage=1 00:25:46.355 --rc genhtml_legend=1 00:25:46.355 --rc geninfo_all_blocks=1 00:25:46.355 --rc geninfo_unexecuted_blocks=1 00:25:46.355 00:25:46.355 ' 00:25:46.355 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:46.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.355 --rc genhtml_branch_coverage=1 00:25:46.355 --rc genhtml_function_coverage=1 00:25:46.355 --rc genhtml_legend=1 00:25:46.355 --rc geninfo_all_blocks=1 00:25:46.355 --rc geninfo_unexecuted_blocks=1 00:25:46.355 00:25:46.355 ' 00:25:46.355 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:46.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.355 --rc genhtml_branch_coverage=1 00:25:46.355 --rc genhtml_function_coverage=1 00:25:46.355 --rc genhtml_legend=1 00:25:46.355 --rc geninfo_all_blocks=1 00:25:46.355 --rc geninfo_unexecuted_blocks=1 00:25:46.355 00:25:46.355 ' 00:25:46.355 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:46.355 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:46.355 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.355 08:24:43 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.355 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.355 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:46.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:46.356 
08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:46.356 08:24:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:54.493 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:54.493 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:54.494 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:54.494 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:54.494 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:54.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:54.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:25:54.494 00:25:54.494 --- 10.0.0.2 ping statistics --- 00:25:54.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.494 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:54.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:54.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:25:54.494 00:25:54.494 --- 10.0.0.1 ping statistics --- 00:25:54.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.494 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2077521 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2077521 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2077521 ']' 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:54.494 08:24:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.494 [2024-11-28 08:24:50.974564] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
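For reference, the cvl_0_0_ns_spdk test bed that nvmf_tcp_init assembled in the trace above can be reproduced by hand. A minimal sketch, assuming this run's interface names (cvl_0_0/cvl_0_1 -- they differ per host):

    TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"     # initiator stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # root ns -> target, as pinged above
    ip netns exec "$NS" ping -c 1 10.0.0.1          # namespace -> initiator
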
00:25:54.494 [2024-11-28 08:24:50.974633] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.494 [2024-11-28 08:24:51.075193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:54.494 [2024-11-28 08:24:51.127737] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.494 [2024-11-28 08:24:51.127795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.494 [2024-11-28 08:24:51.127804] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.494 [2024-11-28 08:24:51.127811] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.494 [2024-11-28 08:24:51.127817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:54.494 [2024-11-28 08:24:51.130203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.494 [2024-11-28 08:24:51.130458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:54.494 [2024-11-28 08:24:51.130590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:54.494 [2024-11-28 08:24:51.130591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.757 08:24:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:54.757 08:24:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:25:54.757 08:24:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:54.757 [2024-11-28 08:24:51.964553] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.757 08:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:54.757 08:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:54.757 08:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.019 08:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:55.019 Malloc1 00:25:55.019 08:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:55.281 08:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:55.542 08:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:55.805 [2024-11-28 08:24:52.835944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.805 08:24:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:55.805 08:24:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:55.805 08:24:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:55.805 08:24:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:55.805 08:24:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:55.805 08:24:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:55.805 08:24:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:55.805 08:24:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:55.805 08:24:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:55.805 08:24:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:55.805 08:24:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:55.805 08:24:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:55.805 08:24:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:55.805 08:24:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:56.097 08:24:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:56.097 08:24:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:56.097 08:24:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:56.097 08:24:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:56.097 08:24:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:56.097 08:24:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:56.097 08:24:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:56.097 08:24:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:56.097 08:24:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:56.097 08:24:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:56.357 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:56.357 fio-3.35 00:25:56.357 Starting 1 thread 00:25:58.895 00:25:58.895 test: (groupid=0, jobs=1): 
err= 0: pid=2078235: Thu Nov 28 08:24:55 2024 00:25:58.895 read: IOPS=13.7k, BW=53.7MiB/s (56.3MB/s)(108MiB/2004msec) 00:25:58.895 slat (usec): min=2, max=309, avg= 2.16, stdev= 2.58 00:25:58.895 clat (usec): min=3209, max=9038, avg=5132.87, stdev=384.98 00:25:58.895 lat (usec): min=3211, max=9051, avg=5135.03, stdev=385.23 00:25:58.895 clat percentiles (usec): 00:25:58.895 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4883], 00:25:58.895 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5145], 60.00th=[ 5211], 00:25:58.895 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5669], 00:25:58.895 | 99.00th=[ 6063], 99.50th=[ 6521], 99.90th=[ 8225], 99.95th=[ 8455], 00:25:58.895 | 99.99th=[ 8979] 00:25:58.895 bw ( KiB/s): min=53952, max=55352, per=99.95%, avg=54968.00, stdev=678.73, samples=4 00:25:58.895 iops : min=13488, max=13838, avg=13742.00, stdev=169.68, samples=4 00:25:58.895 write: IOPS=13.7k, BW=53.6MiB/s (56.2MB/s)(107MiB/2004msec); 0 zone resets 00:25:58.895 slat (usec): min=2, max=280, avg= 2.24, stdev= 1.91 00:25:58.895 clat (usec): min=2519, max=7689, avg=4147.29, stdev=331.40 00:25:58.895 lat (usec): min=2521, max=7695, avg=4149.53, stdev=331.70 00:25:58.895 clat percentiles (usec): 00:25:58.895 | 1.00th=[ 3458], 5.00th=[ 3687], 10.00th=[ 3785], 20.00th=[ 3916], 00:25:58.895 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4228], 00:25:58.895 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4555], 00:25:58.895 | 99.00th=[ 4883], 99.50th=[ 5997], 99.90th=[ 7046], 99.95th=[ 7373], 00:25:58.895 | 99.99th=[ 7570] 00:25:58.895 bw ( KiB/s): min=54264, max=55264, per=99.99%, avg=54902.00, stdev=452.09, samples=4 00:25:58.895 iops : min=13566, max=13816, avg=13725.50, stdev=113.02, samples=4 00:25:58.895 lat (msec) : 4=14.98%, 10=85.02% 00:25:58.895 cpu : usr=74.64%, sys=24.21%, ctx=27, majf=0, minf=16 00:25:58.895 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:58.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:58.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:58.895 issued rwts: total=27553,27509,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:58.895 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:58.895 00:25:58.895 Run status group 0 (all jobs): 00:25:58.895 READ: bw=53.7MiB/s (56.3MB/s), 53.7MiB/s-53.7MiB/s (56.3MB/s-56.3MB/s), io=108MiB (113MB), run=2004-2004msec 00:25:58.895 WRITE: bw=53.6MiB/s (56.2MB/s), 53.6MiB/s-53.6MiB/s (56.2MB/s-56.2MB/s), io=107MiB (113MB), run=2004-2004msec 00:25:58.895 08:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:58.895 08:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:58.895 08:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:58.895 08:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:58.895 08:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:58.895 
08:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:58.895 08:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:58.895 08:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:58.895 08:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:58.895 08:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:58.895 08:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:58.895 08:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:58.895 08:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:58.895 08:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:58.895 08:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:58.895 08:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:58.895 08:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:58.895 08:24:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:58.895 08:24:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:58.895 08:24:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:58.895 08:24:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:58.895 08:24:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:59.156 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:59.156 fio-3.35 00:25:59.156 Starting 1 thread 00:26:01.700 00:26:01.700 test: (groupid=0, jobs=1): err= 0: pid=2078890: Thu Nov 28 08:24:58 2024 00:26:01.700 read: IOPS=9462, BW=148MiB/s (155MB/s)(296MiB/2005msec) 00:26:01.700 slat (usec): min=3, max=111, avg= 3.61, stdev= 1.64 00:26:01.700 clat (usec): min=2656, max=14641, avg=8306.68, stdev=2035.25 00:26:01.700 lat (usec): min=2660, max=14644, avg=8310.29, stdev=2035.41 00:26:01.700 clat percentiles (usec): 00:26:01.700 | 1.00th=[ 4113], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6456], 00:26:01.700 | 30.00th=[ 7046], 40.00th=[ 7570], 50.00th=[ 8160], 60.00th=[ 8848], 00:26:01.700 | 70.00th=[ 9372], 80.00th=[10290], 90.00th=[10945], 95.00th=[11600], 00:26:01.700 | 99.00th=[13173], 99.50th=[13566], 99.90th=[14222], 99.95th=[14353], 00:26:01.700 | 99.99th=[14615] 00:26:01.700 bw ( KiB/s): min=69760, max=82304, per=50.10%, avg=75856.00, stdev=5432.94, samples=4 00:26:01.700 iops : min= 4360, max= 5144, avg=4741.00, stdev=339.56, samples=4 00:26:01.700 write: IOPS=5543, BW=86.6MiB/s (90.8MB/s)(155MiB/1786msec); 0 zone resets 00:26:01.700 slat (usec): min=39, max=454, 
avg=41.06, stdev= 8.86 00:26:01.700 clat (usec): min=3070, max=17004, avg=9157.63, stdev=1482.26 00:26:01.700 lat (usec): min=3110, max=17137, avg=9198.69, stdev=1484.89 00:26:01.700 clat percentiles (usec): 00:26:01.700 | 1.00th=[ 5735], 5.00th=[ 7046], 10.00th=[ 7504], 20.00th=[ 7963], 00:26:01.700 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9372], 00:26:01.700 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[10945], 95.00th=[11600], 00:26:01.700 | 99.00th=[12780], 99.50th=[13566], 99.90th=[16581], 99.95th=[16909], 00:26:01.700 | 99.99th=[16909] 00:26:01.700 bw ( KiB/s): min=73920, max=85184, per=88.98%, avg=78920.00, stdev=5066.95, samples=4 00:26:01.700 iops : min= 4620, max= 5324, avg=4932.50, stdev=316.68, samples=4 00:26:01.700 lat (msec) : 4=0.66%, 10=75.69%, 20=23.65% 00:26:01.700 cpu : usr=84.44%, sys=13.87%, ctx=15, majf=0, minf=24 00:26:01.700 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:01.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:01.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:01.700 issued rwts: total=18973,9901,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:01.700 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:01.700 00:26:01.700 Run status group 0 (all jobs): 00:26:01.700 READ: bw=148MiB/s (155MB/s), 148MiB/s-148MiB/s (155MB/s-155MB/s), io=296MiB (311MB), run=2005-2005msec 00:26:01.700 WRITE: bw=86.6MiB/s (90.8MB/s), 86.6MiB/s-86.6MiB/s (90.8MB/s-90.8MB/s), io=155MiB (162MB), run=1786-1786msec 00:26:01.700 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:01.700 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:26:01.700 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:01.700 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:26:01.700 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:26:01.700 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:01.700 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:26:01.700 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:01.700 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:26:01.700 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:01.700 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:01.700 rmmod nvme_tcp 00:26:01.700 rmmod nvme_fabrics 00:26:01.700 rmmod nvme_keyring 00:26:01.700 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:01.700 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:26:01.700 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:26:01.700 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2077521 ']' 00:26:01.700 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2077521 00:26:01.700 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2077521 ']' 00:26:01.700 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2077521 
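Both fio passes above reach the target through SPDK's external ioengine rather than the kernel NVMe/TCP initiator: the trace LD_PRELOADs the spdk_nvme plugin and encodes the transport address in --filename. A condensed sketch, with paths shortened relative to this workspace:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio ./app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
    # Sanity check on the numbers reported above: BW ~= IOPS x block size,
    # e.g. 13742 IOPS x 4 KiB ~= 53.7 MiB/s and 9462 IOPS x 16 KiB ~= 148 MiB/s.
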
00:26:01.700 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:26:01.700 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:01.700 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2077521 00:26:01.961 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:01.961 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:01.961 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2077521' 00:26:01.961 killing process with pid 2077521 00:26:01.961 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2077521 00:26:01.961 08:24:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2077521 00:26:01.961 08:24:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:01.961 08:24:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:01.961 08:24:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:01.961 08:24:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:26:01.961 08:24:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:26:01.961 08:24:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:01.961 08:24:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:01.961 08:24:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:01.961 08:24:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:01.961 08:24:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.961 08:24:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:01.961 08:24:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:04.506 00:26:04.506 real 0m18.025s 00:26:04.506 user 1m2.797s 00:26:04.506 sys 0m7.835s 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.506 ************************************ 00:26:04.506 END TEST nvmf_fio_host 00:26:04.506 ************************************ 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.506 ************************************ 00:26:04.506 START TEST nvmf_failover 00:26:04.506 ************************************ 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:04.506 * Looking for test storage... 00:26:04.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:26:04.506 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:04.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.507 --rc genhtml_branch_coverage=1 00:26:04.507 --rc genhtml_function_coverage=1 00:26:04.507 --rc genhtml_legend=1 00:26:04.507 --rc geninfo_all_blocks=1 00:26:04.507 --rc geninfo_unexecuted_blocks=1 00:26:04.507 00:26:04.507 ' 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:04.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.507 --rc genhtml_branch_coverage=1 00:26:04.507 --rc genhtml_function_coverage=1 00:26:04.507 --rc genhtml_legend=1 00:26:04.507 --rc geninfo_all_blocks=1 00:26:04.507 --rc geninfo_unexecuted_blocks=1 00:26:04.507 00:26:04.507 ' 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:04.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.507 --rc genhtml_branch_coverage=1 00:26:04.507 --rc genhtml_function_coverage=1 00:26:04.507 --rc genhtml_legend=1 00:26:04.507 --rc geninfo_all_blocks=1 00:26:04.507 --rc geninfo_unexecuted_blocks=1 00:26:04.507 00:26:04.507 ' 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:04.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.507 --rc genhtml_branch_coverage=1 00:26:04.507 --rc genhtml_function_coverage=1 00:26:04.507 --rc genhtml_legend=1 00:26:04.507 --rc geninfo_all_blocks=1 00:26:04.507 --rc geninfo_unexecuted_blocks=1 00:26:04.507 00:26:04.507 ' 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:04.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
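The lt/cmp_versions trace above reduces to an element-wise compare of version strings split on '.', '-' and ':'. A condensed, hypothetical equivalent of the helper in scripts/common.sh:

    ver_lt() {                          # ver_lt 1.15 2 -> true iff $1 < $2
        local IFS=.-: i
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1                        # equal versions are not less-than
    }
    ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov < 2'

The test uses this check to pick the lcov 1.x option spellings (--rc lcov_branch_coverage=1 ...) seen in the exported LCOV_OPTS above.
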
00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:26:04.507 08:25:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:12.648 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:12.648 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:12.649 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:12.649 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:12.649 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:12.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:12.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.711 ms 00:26:12.649 00:26:12.649 --- 10.0.0.2 ping statistics --- 00:26:12.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.649 rtt min/avg/max/mdev = 0.711/0.711/0.711/0.000 ms 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:12.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:12.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:26:12.649 00:26:12.649 --- 10.0.0.1 ping statistics --- 00:26:12.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.649 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:26:12.649 08:25:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:12.649 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:26:12.649 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:12.649 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:12.649 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:12.649 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:12.649 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:12.649 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:12.649 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:12.649 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:12.649 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:12.649 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:12.649 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:12.649 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2083547 00:26:12.649 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2083547 00:26:12.650 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:12.650 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2083547 ']' 00:26:12.650 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.650 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:12.650 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.650 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:12.650 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:12.650 [2024-11-28 08:25:09.116553] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:26:12.650 [2024-11-28 08:25:09.116616] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.650 [2024-11-28 08:25:09.218527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:12.650 [2024-11-28 08:25:09.270310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
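The failover target that comes up next is configured with the same short RPC sequence the fio_host test used above: start nvmf_tgt inside the namespace, then create the TCP transport, a malloc bdev, a subsystem, and one listener per portal. Condensed, with workspace paths shortened:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -m 0xE &   # then wait for /var/tmp/spdk.sock
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s "$port"
    done
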
00:26:12.650 [2024-11-28 08:25:09.270368] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:12.650 [2024-11-28 08:25:09.270377] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:12.650 [2024-11-28 08:25:09.270385] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:12.650 [2024-11-28 08:25:09.270391] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:12.650 [2024-11-28 08:25:09.272268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:12.650 [2024-11-28 08:25:09.272581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:12.650 [2024-11-28 08:25:09.272581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.910 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:12.910 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:12.910 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:12.910 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:12.910 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:12.910 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:12.910 08:25:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:12.910 [2024-11-28 08:25:10.169689] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.171 08:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:13.171 Malloc0 00:26:13.171 08:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:13.432 08:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:13.693 08:25:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:13.953 [2024-11-28 08:25:10.997217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.953 08:25:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:13.953 [2024-11-28 08:25:11.193734] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:13.953 08:25:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:14.215 [2024-11-28 08:25:11.386433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:26:14.215 08:25:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:14.215 08:25:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2084004 00:26:14.215 08:25:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:14.215 08:25:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2084004 /var/tmp/bdevperf.sock 00:26:14.215 08:25:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2084004 ']' 00:26:14.215 08:25:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:14.215 08:25:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:14.215 08:25:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:14.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:14.215 08:25:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:14.215 08:25:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:15.157 08:25:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:15.157 08:25:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:15.157 08:25:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:15.417 NVMe0n1 00:26:15.417 08:25:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:15.678 00:26:15.678 08:25:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:15.678 08:25:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2084253 00:26:15.678 08:25:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:26:17.066 08:25:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:17.066 [2024-11-28 08:25:14.078992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6fed0 is same with the state(6) to be set 00:26:17.066 [2024-11-28 08:25:14.079045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6fed0 is same with the state(6) to be set 00:26:17.066 [2024-11-28 08:25:14.079051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6fed0 is same with the state(6) to be set 00:26:17.066 
00:26:17.066 [2024-11-28 08:25:14.079056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f6fed0 is same with the state(6) to be set
00:26:17.066 [... identical *ERROR* message for tqpair=0x1f6fed0 repeated ~75 more times (08:25:14.079060 through 08:25:14.079383); duplicate lines omitted ...]
00:26:17.067 08:25:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:26:20.365 08:25:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:20.365
00:26:20.365 08:25:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:26:20.626 [2024-11-28 08:25:17.671134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f70cf0 is same with the state(6) to be set
00:26:20.626 [... identical *ERROR* message for tqpair=0x1f70cf0 repeated ~60 more times (08:25:17.671177 through 08:25:17.671457); duplicate lines omitted ...]
00:26:20.627 08:25:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:26:23.922 08:25:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:23.922 [2024-11-28 08:25:20.866433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:23.922 08:25:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:26:24.866 08:25:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:26:24.866 [2024-11-28 08:25:22.063552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f71bf0 is same with the state(6) to be set
00:26:24.866 [... identical *ERROR* message for tqpair=0x1f71bf0 repeated ~95 more times (08:25:22.063586 through 08:25:22.064003); duplicate lines omitted ...]
00:26:24.867 08:25:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2084253
00:26:31.461 {
00:26:31.461 "results": [
00:26:31.461 {
00:26:31.461 "job": "NVMe0n1",
00:26:31.461 "core_mask": "0x1",
00:26:31.461 "workload": "verify",
00:26:31.461 "status": "finished",
00:26:31.461 "verify_range": {
00:26:31.461 "start": 0,
00:26:31.461 "length": 16384
00:26:31.461 },
00:26:31.461 "queue_depth": 128,
00:26:31.461 "io_size": 4096,
00:26:31.461 "runtime": 15.007518,
00:26:31.461 "iops": 12360.9380311921,
00:26:31.461 "mibps": 48.28491418434414,
00:26:31.461 "io_failed": 7637,
00:26:31.461 "io_timeout": 0,
00:26:31.461 "avg_latency_us": 9924.406319015865,
00:26:31.461 "min_latency_us": 384.0, 00:26:31.461 "max_latency_us": 20971.52 00:26:31.461 } 00:26:31.461 ], 00:26:31.461 "core_count": 1 00:26:31.461 } 00:26:31.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2084004 00:26:31.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2084004 ']' 00:26:31.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2084004 00:26:31.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:31.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:31.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2084004 00:26:31.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:31.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:31.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2084004' 00:26:31.461 killing process with pid 2084004 00:26:31.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2084004 00:26:31.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2084004 00:26:31.461 08:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:31.461 [2024-11-28 08:25:11.458753] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:26:31.461 [2024-11-28 08:25:11.458891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2084004 ] 00:26:31.461 [2024-11-28 08:25:11.558232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.461 [2024-11-28 08:25:11.611542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.461 Running I/O for 15 seconds... 
00:26:31.461 10696.00 IOPS, 41.78 MiB/s [2024-11-28T07:25:28.750Z]
[2024-11-28 08:25:14.080559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:31.461 [2024-11-28 08:25:14.080592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~69 further near-identical READ command/completion pairs (lba 92136 through 92680 in steps of 8, len:8, each completed ABORTED - SQ DELETION (00/08)) omitted ...]
00:26:31.463 [2024-11-28 08:25:14.081797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:31.463 [2024-11-28 08:25:14.081804]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.081814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:92696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.463 [2024-11-28 08:25:14.081821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.081830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.463 [2024-11-28 08:25:14.081838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.081847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.463 [2024-11-28 08:25:14.081854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.081865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.463 [2024-11-28 08:25:14.081872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.081882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.463 [2024-11-28 08:25:14.081889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.081899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.463 [2024-11-28 08:25:14.081906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.081916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.463 [2024-11-28 08:25:14.081923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.081932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.463 [2024-11-28 08:25:14.081939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.081949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.463 [2024-11-28 08:25:14.081956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.081965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.463 [2024-11-28 08:25:14.081972] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.081982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.463 [2024-11-28 08:25:14.081989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.081998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.463 [2024-11-28 08:25:14.082005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.082015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.463 [2024-11-28 08:25:14.082022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.082031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.463 [2024-11-28 08:25:14.082039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.082048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.463 [2024-11-28 08:25:14.082056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.082066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.463 [2024-11-28 08:25:14.082074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.082084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.463 [2024-11-28 08:25:14.082091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.082100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.463 [2024-11-28 08:25:14.082107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.082117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.463 [2024-11-28 08:25:14.082124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.082133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.463 [2024-11-28 08:25:14.082140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.082150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.463 [2024-11-28 08:25:14.082161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.082171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.463 [2024-11-28 08:25:14.082178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.082187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.463 [2024-11-28 08:25:14.082194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.082203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.463 [2024-11-28 08:25:14.082211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.082220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.463 [2024-11-28 08:25:14.082227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.082236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.463 [2024-11-28 08:25:14.082244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.082253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:92904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.463 [2024-11-28 08:25:14.082261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.082270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:92912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.463 [2024-11-28 08:25:14.082277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.463 [2024-11-28 08:25:14.082288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.463 [2024-11-28 08:25:14.082296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 
[2024-11-28 08:25:14.082322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:92936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:92944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:92952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:92968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:92976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:92992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:93000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:93008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:93024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:93032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:93048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:93056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:93064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:93072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:93080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:65 nsid:1 lba:93096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:93112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:93120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.464 [2024-11-28 08:25:14.082745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:31.464 [2024-11-28 08:25:14.082776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:31.464 [2024-11-28 08:25:14.082783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93144 len:8 PRP1 0x0 PRP2 0x0 00:26:31.464 [2024-11-28 08:25:14.082791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082829] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:31.464 [2024-11-28 08:25:14.082849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.464 [2024-11-28 08:25:14.082857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.464 [2024-11-28 08:25:14.082873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.464 [2024-11-28 08:25:14.082881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
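The failover notice above shows bdev_nvme abandoning the 10.0.0.2:4420 path and retrying the same subsystem on 10.0.0.2:4421. A minimal sketch of how such a failover pair is typically registered through SPDK's rpc.py (the bdev name Nvme0 is an assumption, and the -x multipath-policy flag may differ between SPDK versions):

# register the primary path for nqn.2016-06.io.spdk:cnode1 (bdev name Nvme0 assumed)
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
# add the alternate path to the same controller; when the first qpair dies,
# queued I/O is aborted (the SQ DELETION entries above) and resubmitted here
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover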
[... repetitive admin-queue entries collapsed: four queued ASYNC EVENT REQUEST (0c) commands (qid:0, cid:0-3) each printed by nvme_admin_qpair_print_command and completed as ABORTED - SQ DELETION (00/08) between 08:25:14.082849 and 08:25:14.082904 ...]
00:26:31.464 [2024-11-28 08:25:14.082912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. [2024-11-28 08:25:14.086504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller [2024-11-28 08:25:14.086528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2487da0 (9): Bad file descriptor [2024-11-28 08:25:14.153994] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 10574.00 IOPS, 41.30 MiB/s [2024-11-28T07:25:28.753Z] 10783.00 IOPS, 42.12 MiB/s [2024-11-28T07:25:28.753Z] 11269.50 IOPS, 44.02 MiB/s [2024-11-28T07:25:28.753Z]
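The per-command abort spam is easier to sanity-check as counts than line by line. A small sketch, assuming the console output of this run was saved as build.log (the filename and the token scan are assumptions, not part of the test):

# total completions reported as aborted by submission-queue deletion
grep -c 'ABORTED - SQ DELETION' build.log
# tally the aborted submissions by opcode; READ/WRITE tokens appear in the
# nvme_io_qpair_print_command records, at varying field positions because
# records wrap across physical log lines, so every field is scanned
awk '/nvme_io_qpair_print_command/ {
  for (i = 1; i <= NF; i++)
    if ($i == "READ" || $i == "WRITE") n[$i]++
} END { for (op in n) print op, n[op] }' build.log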
[... repetitive per-command entries collapsed: a second batch of queued READ commands (sqid:1, lba 54248-54880, len:8, SGL TRANSPORT DATA BLOCK) and WRITE commands (sqid:1, lba 54888-55096, len:8, SGL DATA BLOCK OFFSET) each printed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION (00/08) between 08:25:17.671705 and 08:25:17.673003 ...]
00:26:31.467 [2024-11-28 08:25:17.673009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:45 nsid:1 lba:55104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.467 [2024-11-28 08:25:17.673014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.467 [2024-11-28 08:25:17.673020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:55112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.467 [2024-11-28 08:25:17.673025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.467 [2024-11-28 08:25:17.673032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:55120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.467 [2024-11-28 08:25:17.673037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.467 [2024-11-28 08:25:17.673043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:55128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.467 [2024-11-28 08:25:17.673048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.467 [2024-11-28 08:25:17.673054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.467 [2024-11-28 08:25:17.673060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.467 [2024-11-28 08:25:17.673067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.467 [2024-11-28 08:25:17.673072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.467 [2024-11-28 08:25:17.673078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:55152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.467 [2024-11-28 08:25:17.673083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.467 [2024-11-28 08:25:17.673089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:55160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.467 [2024-11-28 08:25:17.673094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.467 [2024-11-28 08:25:17.673101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:55168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.467 [2024-11-28 08:25:17.673107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.467 [2024-11-28 08:25:17.673114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:55176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.467 [2024-11-28 08:25:17.673119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.468 [2024-11-28 08:25:17.673125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:55184 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:31.468 [2024-11-28 08:25:17.673130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.468 [2024-11-28 08:25:17.673137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:55192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.468 [2024-11-28 08:25:17.673142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.468 [2024-11-28 08:25:17.673148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:55200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.468 [2024-11-28 08:25:17.673153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.468 [2024-11-28 08:25:17.673164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.468 [2024-11-28 08:25:17.673169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.468 [2024-11-28 08:25:17.673176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.468 [2024-11-28 08:25:17.673181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.468 [2024-11-28 08:25:17.673187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.468 [2024-11-28 08:25:17.673192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.468 [2024-11-28 08:25:17.673199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.468 [2024-11-28 08:25:17.673204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.468 [2024-11-28 08:25:17.673210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.468 [2024-11-28 08:25:17.673216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.468 [2024-11-28 08:25:17.673222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:55248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.468 [2024-11-28 08:25:17.673227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.468 [2024-11-28 08:25:17.673233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.468 [2024-11-28 08:25:17.673239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.468 [2024-11-28 08:25:17.673255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:31.468 [2024-11-28 08:25:17.673260] nvme_qpair.c: 
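Note: the "(00/08)" pair in these completions is (sct/sc): Generic Command Status (0x00) with ABORTED - SQ DELETION (0x08). The target tore down submission queue 1 during failover, so every in-flight command on that qpair completes with this status, and dnr:0 marks it retryable. A minimal sketch of reacting to it in an SPDK completion callback, assuming a hypothetical io_ctx wrapper that is not part of this test:

    #include "spdk/nvme.h"

    /* Hypothetical per-I/O bookkeeping for resubmission; illustrative only. */
    struct io_ctx {
            struct spdk_nvme_ns *ns;
            struct spdk_nvme_qpair *qpair;
            void *buf;
            uint64_t lba;
            uint32_t lba_count;     /* len:8 in the log */
    };

    static void
    read_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            struct io_ctx *io = arg;

            /* "(00/08)" is (sct/sc): generic status, ABORTED - SQ DELETION. */
            if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* dnr:0, so the command may be retried; a real application
                     * would defer this until the qpair is reconnected. */
                    spdk_nvme_ns_cmd_read(io->ns, io->qpair, io->buf,
                                          io->lba, io->lba_count,
                                          read_done, io, 0);
            }
    }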
00:26:31.468 [2024-11-28 08:25:17.673255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:31.468 [2024-11-28 08:25:17.673260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:31.468 [2024-11-28 08:25:17.673265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55264 len:8 PRP1 0x0 PRP2 0x0
00:26:31.468 [2024-11-28 08:25:17.673272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:31.468 [2024-11-28 08:25:17.673305] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:26:31.468 [2024-11-28 08:25:17.673321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:31.468 [2024-11-28 08:25:17.673327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:31.468 [2024-11-28 08:25:17.673333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:31.468 [2024-11-28 08:25:17.673338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:31.468 [2024-11-28 08:25:17.673344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:31.468 [2024-11-28 08:25:17.673349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:31.468 [2024-11-28 08:25:17.673355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:31.468 [2024-11-28 08:25:17.673360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:31.468 [2024-11-28 08:25:17.673365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:26:31.468 [2024-11-28 08:25:17.675831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:26:31.468 [2024-11-28 08:25:17.675851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2487da0 (9): Bad file descriptor
00:26:31.468 [2024-11-28 08:25:17.701674] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
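Note: the sequence above is the whole bdev_nvme failover path in one piece: the TCP connection drops (Bad file descriptor), the controller is marked failed, and a reset reconnects it to the alternate listener (10.0.0.2:4421 -> 10.0.0.2:4422). A rough standalone analogue using public SPDK host APIs, a sketch only; the trid switching itself happens inside the bdev_nvme module, not in this code:

    #include "spdk/nvme.h"

    /* Rough analogue of the recovery sequence logged above,
     * assuming one controller and one I/O qpair. */
    static void
    poll_and_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
    {
            /* A transport error here is what produced "Failed to flush
             * tqpair ... Bad file descriptor" in the log. */
            if (spdk_nvme_qpair_process_completions(qpair, 0) < 0 ||
                spdk_nvme_ctrlr_is_failed(ctrlr)) {
                    /* "resetting controller" -> "Resetting controller successful." */
                    if (spdk_nvme_ctrlr_reset(ctrlr) == 0) {
                            /* I/O qpairs must be reconnected after a reset. */
                            spdk_nvme_ctrlr_reconnect_io_qpair(qpair);
                    }
            }
    }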
00:26:31.468 11458.20 IOPS, 44.76 MiB/s [2024-11-28T07:25:28.757Z]
00:26:31.468 11691.17 IOPS, 45.67 MiB/s [2024-11-28T07:25:28.757Z]
00:26:31.468 11859.29 IOPS, 46.33 MiB/s [2024-11-28T07:25:28.757Z]
00:26:31.468 11989.12 IOPS, 46.83 MiB/s [2024-11-28T07:25:28.757Z]
00:26:31.468 12086.22 IOPS, 47.21 MiB/s [2024-11-28T07:25:28.757Z]
00:26:31.468 [2024-11-28 08:25:22.065818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:121808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:31.468 [2024-11-28 08:25:22.065848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
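Note: the throughput samples are self-consistent with the command prints. Every I/O in this run is len:8 blocks, which at the conventional 512-byte sector size (the one assumption here) is 4096 B per command, so each MiB/s figure equals IOPS * 4096 / 2^20. A quick check:

    #include <stdio.h>

    /* Each logged I/O is len:8 blocks; assuming 512 B sectors that is 4096 B,
     * so MiB/s should equal IOPS * 4096 / 2^20. */
    int
    main(void)
    {
            const double iops[] = { 11458.20, 11691.17, 11859.29, 11989.12, 12086.22 };

            for (unsigned int i = 0; i < sizeof(iops) / sizeof(iops[0]); i++) {
                    printf("%8.2f IOPS -> %5.2f MiB/s\n",
                           iops[i], iops[i] * 4096.0 / (1 << 20));
            }
            return 0; /* prints 44.76, 45.67, 46.33, 46.83, 47.21 - matching the log */
    }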
[... the abort storm repeats after failover: NOTICE pairs for READs lba 121816-122104 and WRITEs lba 122112-122568 on qid:1, each completed as ABORTED - SQ DELETION (00/08) ...]
00:26:31.471 [2024-11-28 08:25:22.066995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:31.471 [2024-11-28 08:25:22.067002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122576 len:8 PRP1 0x0 PRP2 0x0
00:26:31.471 [2024-11-28 08:25:22.067007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
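Note: the entries printed with PRP1 0x0 PRP2 0x0 and cid:0 are requests that were still queued in software and never reached the wire; nvme_qpair_abort_queued_reqs drains them and nvme_qpair_manual_complete_request synthesizes a completion for each. A conceptual sketch of that manual completion; the callback parameter is an assumption for illustration, not SPDK's internal request layout:

    #include <string.h>
    #include "spdk/nvme_spec.h"

    /* Conceptual sketch of "Command completed manually": a request that never
     * reached the wire (hence PRP1 0x0 PRP2 0x0, cid:0) gets a synthesized
     * completion entry carrying the SQ-deletion status. */
    static void
    manual_complete_sq_deletion(void (*cb)(void *, const struct spdk_nvme_cpl *),
                                void *cb_arg, uint16_t sqid)
    {
            struct spdk_nvme_cpl cpl;

            memset(&cpl, 0, sizeof(cpl));
            cpl.sqid = sqid;
            cpl.status.sct = SPDK_NVME_SCT_GENERIC;           /* 00 */
            cpl.status.sc = SPDK_NVME_SC_ABORTED_SQ_DELETION; /* 08 */
            cpl.status.dnr = 0;                               /* retryable */
            cb(cb_arg, &cpl);
    }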
[... the "aborting queued i/o" / "Command completed manually" sequence repeats for queued WRITEs lba 122584-122728 ...]
00:26:31.471 [2024-11-28 08:25:22.067376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:31.471 [2024-11-28 08:25:22.067380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:31.471 [2024-11-28 08:25:22.067384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122736 len:8 PRP1 0x0 PRP2 0x0
00:26:31.471 [2024-11-28 08:25:22.067389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:26:31.471 [2024-11-28 08:25:22.067394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:31.471 [2024-11-28 08:25:22.067398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:31.471 [2024-11-28 08:25:22.067402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122744 len:8 PRP1 0x0 PRP2 0x0 00:26:31.471 [2024-11-28 08:25:22.067408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.471 [2024-11-28 08:25:22.079347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:31.471 [2024-11-28 08:25:22.079374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:31.471 [2024-11-28 08:25:22.079384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122752 len:8 PRP1 0x0 PRP2 0x0 00:26:31.471 [2024-11-28 08:25:22.079393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.471 [2024-11-28 08:25:22.079401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:31.471 [2024-11-28 08:25:22.079406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:31.472 [2024-11-28 08:25:22.079412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122760 len:8 PRP1 0x0 PRP2 0x0 00:26:31.472 [2024-11-28 08:25:22.079419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.472 [2024-11-28 08:25:22.079426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:31.472 [2024-11-28 08:25:22.079431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:31.472 [2024-11-28 08:25:22.079437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122768 len:8 PRP1 0x0 PRP2 0x0 00:26:31.472 [2024-11-28 08:25:22.079444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.472 [2024-11-28 08:25:22.079451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:31.472 [2024-11-28 08:25:22.079457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:31.472 [2024-11-28 08:25:22.079462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122776 len:8 PRP1 0x0 PRP2 0x0 00:26:31.472 [2024-11-28 08:25:22.079469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.472 [2024-11-28 08:25:22.079476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:31.472 [2024-11-28 08:25:22.079481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:31.472 [2024-11-28 08:25:22.079487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122784 len:8 PRP1 0x0 PRP2 0x0 00:26:31.472 [2024-11-28 08:25:22.079493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.472 [2024-11-28 
08:25:22.079500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:31.472 [2024-11-28 08:25:22.079506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:31.472 [2024-11-28 08:25:22.079512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122792 len:8 PRP1 0x0 PRP2 0x0 00:26:31.472 [2024-11-28 08:25:22.079518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.472 [2024-11-28 08:25:22.079525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:31.472 [2024-11-28 08:25:22.079530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:31.472 [2024-11-28 08:25:22.079536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122800 len:8 PRP1 0x0 PRP2 0x0 00:26:31.472 [2024-11-28 08:25:22.079543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.472 [2024-11-28 08:25:22.079550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:31.472 [2024-11-28 08:25:22.079555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:31.472 [2024-11-28 08:25:22.079565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122808 len:8 PRP1 0x0 PRP2 0x0 00:26:31.472 [2024-11-28 08:25:22.079572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.472 [2024-11-28 08:25:22.079580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:31.472 [2024-11-28 08:25:22.079585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:31.472 [2024-11-28 08:25:22.079590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122816 len:8 PRP1 0x0 PRP2 0x0 00:26:31.472 [2024-11-28 08:25:22.079597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.472 [2024-11-28 08:25:22.079604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:31.472 [2024-11-28 08:25:22.079609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:31.472 [2024-11-28 08:25:22.079615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122824 len:8 PRP1 0x0 PRP2 0x0 00:26:31.472 [2024-11-28 08:25:22.079621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.472 [2024-11-28 08:25:22.079662] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:31.472 [2024-11-28 08:25:22.079690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.472 [2024-11-28 08:25:22.079700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.472 [2024-11-28 08:25:22.079709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.472 [2024-11-28 08:25:22.079716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.472 [2024-11-28 08:25:22.079724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.472 [2024-11-28 08:25:22.079731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.472 [2024-11-28 08:25:22.079738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.472 [2024-11-28 08:25:22.079746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.472 [2024-11-28 08:25:22.079753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:31.472 [2024-11-28 08:25:22.079793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2487da0 (9): Bad file descriptor 00:26:31.472 [2024-11-28 08:25:22.083093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:31.472 [2024-11-28 08:25:22.147494] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:26:31.472 12069.50 IOPS, 47.15 MiB/s [2024-11-28T07:25:28.761Z] 12155.91 IOPS, 47.48 MiB/s [2024-11-28T07:25:28.761Z] 12212.08 IOPS, 47.70 MiB/s [2024-11-28T07:25:28.761Z] 12279.15 IOPS, 47.97 MiB/s [2024-11-28T07:25:28.761Z] 12319.00 IOPS, 48.12 MiB/s [2024-11-28T07:25:28.761Z] 12359.73 IOPS, 48.28 MiB/s 00:26:31.472 Latency(us) 00:26:31.472 [2024-11-28T07:25:28.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.472 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:31.472 Verification LBA range: start 0x0 length 0x4000 00:26:31.472 NVMe0n1 : 15.01 12360.94 48.28 508.88 0.00 9924.41 384.00 20971.52 00:26:31.472 [2024-11-28T07:25:28.761Z] =================================================================================================================== 00:26:31.472 [2024-11-28T07:25:28.761Z] Total : 12360.94 48.28 508.88 0.00 9924.41 384.00 20971.52 00:26:31.472 Received shutdown signal, test time was about 15.000000 seconds 00:26:31.472 00:26:31.472 Latency(us) 00:26:31.472 [2024-11-28T07:25:28.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.472 [2024-11-28T07:25:28.761Z] =================================================================================================================== 00:26:31.472 [2024-11-28T07:25:28.761Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:31.472 08:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:26:31.472 08:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:26:31.472 08:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:26:31.472 08:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2087262 00:26:31.472 08:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2087262 /var/tmp/bdevperf.sock 00:26:31.472 08:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:31.472 08:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2087262 ']' 00:26:31.472 08:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:31.472 08:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:31.472 08:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:31.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:31.472 08:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:31.472 08:25:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:32.045 08:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:32.045 08:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:32.045 08:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:32.045 [2024-11-28 08:25:29.259975] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:32.045 08:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:32.305 [2024-11-28 08:25:29.436424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:32.305 08:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:32.566 NVMe0n1 00:26:32.566 08:25:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:32.826 00:26:33.134 08:25:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:33.134 00:26:33.449 08:25:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:33.449 08:25:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:33.449 08:25:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:33.730 08:25:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:26:37.030 08:25:33 
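The block above is the whole failover fixture: bdevperf is launched idle with -z so controllers arrive over its RPC socket, two extra listeners are opened on the target, all three paths are attached to one controller with -x failover, and the active 4420 path is then torn down to force a path switch. A minimal sketch of that sequence, assuming the same paths and addresses as this run and eliding the waitforlisten polling loop:

  # Sketch only; binary path, ports and NQN are copied from the trace above.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC=/var/tmp/bdevperf.sock

  # Start bdevperf idle (-z) so controllers can be attached via RPC later.
  $SPDK/build/examples/bdevperf -z -r $RPC -q 128 -o 4096 -w verify -t 1 -f &

  # Two extra target listeners to fail over to.
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # Attach every path with the failover policy, then drop the active one.
  for port in 4420 4421 4422; do
    $SPDK/scripts/rpc.py -s $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  done
  $SPDK/scripts/rpc.py -s $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1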
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:37.030 08:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:26:37.030 08:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:37.030 08:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2088305 00:26:37.030 08:25:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2088305 00:26:37.975 { 00:26:37.975 "results": [ 00:26:37.975 { 00:26:37.975 "job": "NVMe0n1", 00:26:37.975 "core_mask": "0x1", 00:26:37.975 "workload": "verify", 00:26:37.975 "status": "finished", 00:26:37.975 "verify_range": { 00:26:37.975 "start": 0, 00:26:37.975 "length": 16384 00:26:37.975 }, 00:26:37.975 "queue_depth": 128, 00:26:37.975 "io_size": 4096, 00:26:37.975 "runtime": 1.007959, 00:26:37.975 "iops": 12669.166106954746, 00:26:37.975 "mibps": 49.48893010529198, 00:26:37.975 "io_failed": 0, 00:26:37.975 "io_timeout": 0, 00:26:37.975 "avg_latency_us": 10068.705729052466, 00:26:37.975 "min_latency_us": 1911.4666666666667, 00:26:37.975 "max_latency_us": 13161.813333333334 00:26:37.975 } 00:26:37.975 ], 00:26:37.975 "core_count": 1 00:26:37.975 } 00:26:37.975 08:25:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:37.975 [2024-11-28 08:25:28.302267] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:26:37.975 [2024-11-28 08:25:28.302325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2087262 ] 00:26:37.975 [2024-11-28 08:25:28.386221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.975 [2024-11-28 08:25:28.415338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.975 [2024-11-28 08:25:30.736551] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:37.975 [2024-11-28 08:25:30.736593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.975 [2024-11-28 08:25:30.736601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.975 [2024-11-28 08:25:30.736609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.975 [2024-11-28 08:25:30.736614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.975 [2024-11-28 08:25:30.736620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.975 [2024-11-28 08:25:30.736625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.975 [2024-11-28 08:25:30.736631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
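The mibps value in the JSON results above is derived rather than sampled: it is iops times the 4096-byte io_size, divided by 2^20 (and iops itself is completed I/Os over the 1.007959 s runtime, roughly 12770 operations). A quick check of the reported numbers:

  # mibps = iops * io_size / 2^20 -- reproduces the 'mibps' field above.
  awk 'BEGIN { printf "%.8f\n", 12669.166106954746 * 4096 / 1048576 }'
  # prints 49.48893011, matching mibps = 49.48893010529198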
EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:37.975 [2024-11-28 08:25:30.736636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:37.975 [2024-11-28 08:25:30.736641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:26:37.975 [2024-11-28 08:25:30.736662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:26:37.975 [2024-11-28 08:25:30.736673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2014da0 (9): Bad file descriptor 00:26:37.975 [2024-11-28 08:25:30.743154] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:26:37.975 Running I/O for 1 seconds... 00:26:37.975 12641.00 IOPS, 49.38 MiB/s 00:26:37.975 Latency(us) 00:26:37.975 [2024-11-28T07:25:35.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.975 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:37.975 Verification LBA range: start 0x0 length 0x4000 00:26:37.975 NVMe0n1 : 1.01 12669.17 49.49 0.00 0.00 10068.71 1911.47 13161.81 00:26:37.975 [2024-11-28T07:25:35.264Z] =================================================================================================================== 00:26:37.975 [2024-11-28T07:25:35.264Z] Total : 12669.17 49.49 0.00 0.00 10068.71 1911.47 13161.81 00:26:37.975 08:25:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:37.975 08:25:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:26:37.975 08:25:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:38.235 08:25:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:38.235 08:25:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:26:38.496 08:25:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:38.759 08:25:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:26:42.062 08:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:42.062 08:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:26:42.062 08:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2087262 00:26:42.062 08:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2087262 ']' 00:26:42.062 08:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2087262 00:26:42.063 08:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:42.063 08:25:38 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:42.063 08:25:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2087262 00:26:42.063 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:42.063 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:42.063 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2087262' 00:26:42.063 killing process with pid 2087262 00:26:42.063 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2087262 00:26:42.063 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2087262 00:26:42.063 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:42.063 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:42.063 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:42.063 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:42.063 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:42.063 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:42.063 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:26:42.323 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:42.323 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:26:42.323 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:42.323 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:42.323 rmmod nvme_tcp 00:26:42.323 rmmod nvme_fabrics 00:26:42.324 rmmod nvme_keyring 00:26:42.324 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:42.324 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:26:42.324 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:26:42.324 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2083547 ']' 00:26:42.324 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2083547 00:26:42.324 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2083547 ']' 00:26:42.324 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2083547 00:26:42.324 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:42.324 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:42.324 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2083547 00:26:42.324 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:42.324 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:42.324 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # 
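The trace above is autotest_common.sh's killprocess idiom: probe the pid with kill -0, read back the command name with ps so the wrong process (or a sudo wrapper) is never signalled, then announce and kill it. A condensed sketch of that guard, assuming a plain SIGTERM is enough (the real helper has extra handling for processes started under sudo):

  killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" || return 1                        # still alive?
    process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
    [ "$process_name" = sudo ] && return 1            # never signal the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                       # reap and propagate exit status
  }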
echo 'killing process with pid 2083547' 00:26:42.324 killing process with pid 2083547 00:26:42.324 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2083547 00:26:42.324 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2083547 00:26:42.324 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:42.324 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:42.324 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:42.324 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:26:42.324 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:26:42.324 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:42.324 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:26:42.584 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:42.584 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:42.584 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.584 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:42.584 08:25:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.496 08:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:44.496 00:26:44.496 real 0m40.406s 00:26:44.496 user 2m4.043s 00:26:44.496 sys 0m8.807s 00:26:44.496 08:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:44.496 08:25:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:44.496 ************************************ 00:26:44.496 END TEST nvmf_failover 00:26:44.496 ************************************ 00:26:44.496 08:25:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:44.496 08:25:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:44.496 08:25:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:44.496 08:25:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.496 ************************************ 00:26:44.496 START TEST nvmf_host_discovery 00:26:44.496 ************************************ 00:26:44.496 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:44.757 * Looking for test storage... 
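nvmftestfini above unwinds the fixture in reverse: the kernel initiator modules come out, the SPDK-tagged firewall rules are filtered away through an iptables-save/restore round trip, addresses are flushed, and the target namespace is removed. Roughly, and assuming the namespace is deleted with ip netns delete (the log only shows _remove_spdk_ns):

  # Verbose removal of nvme-tcp also drops its nvme_fabrics/nvme_keyring dependencies.
  modprobe -v -r nvme-tcp

  # Drop only the rules this test added, identified by their SPDK_NVMF comment tag.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Flush the initiator address and delete the target namespace.
  ip -4 addr flush cvl_0_1
  ip netns delete cvl_0_0_ns_spdk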
00:26:44.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:44.757 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:44.757 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:26:44.757 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:44.757 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:44.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.758 --rc genhtml_branch_coverage=1 00:26:44.758 --rc genhtml_function_coverage=1 00:26:44.758 --rc genhtml_legend=1 00:26:44.758 --rc geninfo_all_blocks=1 00:26:44.758 --rc geninfo_unexecuted_blocks=1 00:26:44.758 00:26:44.758 ' 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:44.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.758 --rc genhtml_branch_coverage=1 00:26:44.758 --rc genhtml_function_coverage=1 00:26:44.758 --rc genhtml_legend=1 00:26:44.758 --rc geninfo_all_blocks=1 00:26:44.758 --rc geninfo_unexecuted_blocks=1 00:26:44.758 00:26:44.758 ' 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:44.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.758 --rc genhtml_branch_coverage=1 00:26:44.758 --rc genhtml_function_coverage=1 00:26:44.758 --rc genhtml_legend=1 00:26:44.758 --rc geninfo_all_blocks=1 00:26:44.758 --rc geninfo_unexecuted_blocks=1 00:26:44.758 00:26:44.758 ' 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:44.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.758 --rc genhtml_branch_coverage=1 00:26:44.758 --rc genhtml_function_coverage=1 00:26:44.758 --rc genhtml_legend=1 00:26:44.758 --rc geninfo_all_blocks=1 00:26:44.758 --rc geninfo_unexecuted_blocks=1 00:26:44.758 00:26:44.758 ' 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:44.758 08:25:41 
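The lt 1.15 2 trace above is scripts/common.sh comparing dotted versions field by field after splitting on '.', '-' and ':', padding the shorter one with zeros, which is how lcov 1.15 sorts below 2. A self-contained sketch of that comparison, assuming purely numeric fields:

  version_lt() {                      # returns 0 when $1 < $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                          # equal is not less-than
  }
  version_lt 1.15 2 && echo '1.15 < 2'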
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:44.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:44.758 08:25:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:44.758 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:44.758 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:44.758 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
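common.sh above mints a host NQN and matching host ID once (nvme gen-hostnqn) and threads them through every kernel-initiator command; combined with the discovery port defined here, a discovery request against this target would look roughly like the following. This invocation is an assumption for illustration, not taken from the log:

  # Hypothetical nvme-cli discovery using the generated host identity.
  nvme discover -t tcp -a 10.0.0.2 -s 8009 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be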
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:44.758 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:44.758 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:44.758 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:44.758 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:44.759 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:44.759 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:44.759 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:44.759 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:44.759 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:44.759 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.759 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:44.759 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.759 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:44.759 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:44.759 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:26:44.759 08:25:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.899 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:52.899 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:26:52.899 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:52.899 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:52.899 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:52.899 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:52.899 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:52.899 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:26:52.899 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:52.899 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:26:52.899 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:52.900 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:52.900 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:52.900 08:25:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:52.900 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:52.900 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:52.900 
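The scan above whitelists known device IDs (Intel E810 0x1592/0x159b, X722 0x37d2, plus the Mellanox list), then maps each surviving PCI function to its netdev through sysfs and keeps only interfaces that are up. The per-device lookup, as a sketch:

  # Resolve one PCI function to its net device, as nvmf/common.sh does.
  pci=0000:4b:00.0
  for net in /sys/bus/pci/devices/$pci/net/*; do
    dev=${net##*/}                               # e.g. cvl_0_0
    [[ $(cat "$net/operstate") == up ]] || continue
    echo "Found net devices under $pci: $dev"
  done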
08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:52.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:52.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:26:52.900 00:26:52.900 --- 10.0.0.2 ping statistics --- 00:26:52.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.900 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:52.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:52.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:26:52.900 00:26:52.900 --- 10.0.0.1 ping statistics --- 00:26:52.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.900 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:52.900 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:52.901 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.901 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2093624 00:26:52.901 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2093624 00:26:52.901 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:52.901 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2093624 ']' 00:26:52.901 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.901 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:52.901 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:52.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:52.901 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:52.901 08:25:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.901 [2024-11-28 08:25:49.641725] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
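
nvmf_tcp_init has now finished building the test topology: the first port (cvl_0_0, 10.0.0.2) was moved into the cvl_0_0_ns_spdk network namespace to play the target, the second port (cvl_0_1, 10.0.0.1) stayed in the default namespace as the initiator, TCP/4420 was opened in the firewall, and the two pings proved reachability in both directions. Condensed from the commands in the log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
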
00:26:52.901 [2024-11-28 08:25:49.641792] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:52.901 [2024-11-28 08:25:49.740182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.901 [2024-11-28 08:25:49.790542] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:52.901 [2024-11-28 08:25:49.790590] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:52.901 [2024-11-28 08:25:49.790599] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:52.901 [2024-11-28 08:25:49.790606] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:52.901 [2024-11-28 08:25:49.790612] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:52.901 [2024-11-28 08:25:49.791367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.474 [2024-11-28 08:25:50.506252] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.474 [2024-11-28 08:25:50.518537] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.474 null0 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.474 null1 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2093894 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2093894 /tmp/host.sock 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2093894 ']' 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:53.474 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:53.474 08:25:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:53.474 [2024-11-28 08:25:50.617070] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
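
Two separate nvmf_tgt instances are being brought up here, kept apart by RPC socket and core mask: the target runs inside the namespace on core 1 (-m 0x2) answering on the default /var/tmp/spdk.sock, while the host side runs in the default namespace on core 0 (-m 0x1) with a private socket (-r /tmp/host.sock), so plain rpc_cmd vs. rpc_cmd -s /tmp/host.sock selects which instance a call hits. A sketch of the arrangement; waitforlisten's exact polling is a test-helper detail, so the readiness loop below is just one plausible check, and the scripts/rpc.py path is assumed:

    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # target, core 1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &                              # host, core 0
    until scripts/rpc.py -s /tmp/host.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1   # wait for the RPC server to come up on the UNIX socket
    done
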
00:26:53.474 [2024-11-28 08:25:50.617134] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2093894 ] 00:26:53.474 [2024-11-28 08:25:50.707967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.474 [2024-11-28 08:25:50.760717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:54.419 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.420 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:54.420 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.420 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:54.420 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:54.420 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.420 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:54.420 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.420 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.420 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:54.420 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:54.420 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.420 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:54.420 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:54.420 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.420 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.420 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.420 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:54.681 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:54.681 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:54.681 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.681 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.681 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:54.681 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:54.681 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.682 [2024-11-28 08:25:51.809835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:54.682 08:25:51 
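
The target side is now fully provisioned: a TCP transport, a discovery listener on 8009, two 1000 MB null bdevs with 512-byte blocks, and subsystem cnode0 with null0 attached and a data listener on 4420. Replayed as plain rpc.py calls against the target's default socket (rpc_cmd in this log is the test wrapper around scripts/rpc.py):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    rpc.py bdev_null_create null0 1000 512
    rpc.py bdev_null_create null1 1000 512
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
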
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:54.682 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:54.943 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:54.943 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:54.943 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:54.943 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.943 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:54.943 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.943 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:54.943 08:25:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.943 08:25:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:26:54.943 08:25:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:55.513 [2024-11-28 08:25:52.515192] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:55.513 [2024-11-28 08:25:52.515224] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:55.513 [2024-11-28 08:25:52.515239] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:55.513 
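
This is the pivot of the test: bdev_nvme_start_discovery (issued at discovery.sh@51) has been polling the discovery subsystem on 8009 all along, but every get_subsystem_names/get_bdev_list probe came back empty because cnode0 did not yet allow the host's NQN. The nvmf_subsystem_add_host call whitelists it, the discovery connection now sees cnode0, and the attach sequence just above (discovery ctrlr attached/connected, log page requested) kicks off. The two RPCs that interact:

    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test   # makes cnode0 visible to that host NQN
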
[2024-11-28 08:25:52.603502] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:55.513 [2024-11-28 08:25:52.786778] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:55.513 [2024-11-28 08:25:52.787981] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x12c37f0:1 started. 00:26:55.513 [2024-11-28 08:25:52.789896] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:55.513 [2024-11-28 08:25:52.789925] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:55.513 [2024-11-28 08:25:52.794687] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x12c37f0 was disconnected and freed. delete nvme_qpair. 00:26:55.773 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:55.773 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:55.773 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:55.773 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:55.773 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:55.773 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.773 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:55.773 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.773 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:55.773 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.034 08:25:53 
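
The INFO lines above are the normal happy-path attach: the discovery log page reports nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420 as a new subsystem, bdev_nvme creates controller nvme0 against it, connects a qpair, and finishes the attach; the "disconnected and freed" qpair message is INFO-level lifecycle noise, not a failure. One way to inspect the discovery service state by hand (assuming this SPDK build exposes the RPC, which recent ones do):

    rpc.py -s /tmp/host.sock bdev_nvme_get_discovery_info
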
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:56.034 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:56.035 [2024-11-28 08:25:53.251783] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x12c39d0:1 started. 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.035 [2024-11-28 08:25:53.255016] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x12c39d0 was disconnected and freed. delete nvme_qpair. 
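
After nvmf_subsystem_add_ns cnode0 null1, the host should see one controller and two namespace bdevs, and the notification stream should have advanced by one entry. The same pipelines the script loops on can be run by hand:

    rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'        # expect: nvme0
    rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs    # expect: nvme0n1 nvme0n2
    rpc.py -s /tmp/host.sock notify_get_notifications -i 1 | jq '. | length'     # notifications past id 1
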
00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.035 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.296 [2024-11-28 08:25:53.358389] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:56.296 [2024-11-28 08:25:53.359273] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:56.296 [2024-11-28 08:25:53.359304] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:56.296 [2024-11-28 08:25:53.447561] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.296 [2024-11-28 08:25:53.509488] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:26:56.296 [2024-11-28 08:25:53.509539] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:56.296 [2024-11-28 08:25:53.509550] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:56.296 [2024-11-28 08:25:53.509555] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:56.296 08:25:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:57.238 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:57.238 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.501 [2024-11-28 08:25:54.634530] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:57.501 [2024-11-28 08:25:54.634562] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:57.501 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:57.501 [2024-11-28 08:25:54.641339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.501 [2024-11-28 08:25:54.641362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.501 [2024-11-28 08:25:54.641373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.501 [2024-11-28 08:25:54.641382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.501 [2024-11-28 08:25:54.641390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.501 [2024-11-28 08:25:54.641398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.502 [2024-11-28 08:25:54.641406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.502 [2024-11-28 08:25:54.641414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.502 [2024-11-28 08:25:54.641422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1293e10 is same with the state(6) to be set 00:26:57.502 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:57.502 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:57.502 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:57.502 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.502 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:57.502 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.502 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:57.502 [2024-11-28 08:25:54.651350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1293e10 (9): Bad file descriptor 00:26:57.502 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.502 [2024-11-28 08:25:54.661388] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:57.502 [2024-11-28 08:25:54.661401] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:57.502 [2024-11-28 08:25:54.661406] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:57.502 [2024-11-28 08:25:54.661411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:57.502 [2024-11-28 08:25:54.661433] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:57.502 [2024-11-28 08:25:54.661815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.502 [2024-11-28 08:25:54.661832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1293e10 with addr=10.0.0.2, port=4420 00:26:57.502 [2024-11-28 08:25:54.661841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1293e10 is same with the state(6) to be set 00:26:57.502 [2024-11-28 08:25:54.661854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1293e10 (9): Bad file descriptor 00:26:57.502 [2024-11-28 08:25:54.661880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:57.502 [2024-11-28 08:25:54.661889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:57.502 [2024-11-28 08:25:54.661898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:57.502 [2024-11-28 08:25:54.661905] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:57.502 [2024-11-28 08:25:54.661911] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:57.502 [2024-11-28 08:25:54.661916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
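
errno 111 is ECONNREFUSED: nvmf_subsystem_remove_listener (discovery.sh@127, above) closed the 4420 socket on the target, so the host's reset path now runs its full cycle (delete qpairs -> disconnect -> reconnect -> connect() refused -> "Resetting controller failed.") and schedules another attempt. To decode the errno:

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'   # ECONNREFUSED - Connection refused
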
00:26:57.502 [2024-11-28 08:25:54.671464] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:57.502 [2024-11-28 08:25:54.671476] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:57.502 [2024-11-28 08:25:54.671481] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:57.502 [2024-11-28 08:25:54.671486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:57.502 [2024-11-28 08:25:54.671501] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:57.502 [2024-11-28 08:25:54.671811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.502 [2024-11-28 08:25:54.671824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1293e10 with addr=10.0.0.2, port=4420 00:26:57.502 [2024-11-28 08:25:54.671832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1293e10 is same with the state(6) to be set 00:26:57.502 [2024-11-28 08:25:54.671843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1293e10 (9): Bad file descriptor 00:26:57.502 [2024-11-28 08:25:54.671861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:57.502 [2024-11-28 08:25:54.671868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:57.502 [2024-11-28 08:25:54.671875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:57.502 [2024-11-28 08:25:54.671881] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:57.502 [2024-11-28 08:25:54.671886] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:57.502 [2024-11-28 08:25:54.671891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:57.502 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.502 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:57.502 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:57.502 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:57.502 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:57.502 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:57.502 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:57.502 [2024-11-28 08:25:54.681533] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:57.502 [2024-11-28 08:25:54.681551] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:26:57.502 [2024-11-28 08:25:54.681556] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:57.502 [2024-11-28 08:25:54.681561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:57.502 [2024-11-28 08:25:54.681577] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:57.502 [2024-11-28 08:25:54.681869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.502 [2024-11-28 08:25:54.681885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1293e10 with addr=10.0.0.2, port=4420 00:26:57.502 [2024-11-28 08:25:54.681894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1293e10 is same with the state(6) to be set 00:26:57.502 [2024-11-28 08:25:54.681906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1293e10 (9): Bad file descriptor 00:26:57.502 [2024-11-28 08:25:54.681925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:57.502 [2024-11-28 08:25:54.681933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:57.502 [2024-11-28 08:25:54.681941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:57.502 [2024-11-28 08:25:54.681948] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:57.502 [2024-11-28 08:25:54.681953] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:57.502 [2024-11-28 08:25:54.681958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:57.502 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:57.503 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.503 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:57.503 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.503 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:57.503 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.503 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:57.503 [2024-11-28 08:25:54.691609] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:57.503 [2024-11-28 08:25:54.691626] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:57.503 [2024-11-28 08:25:54.691631] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:57.503 [2024-11-28 08:25:54.691637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:57.503 [2024-11-28 08:25:54.691654] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:57.503 [2024-11-28 08:25:54.691862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.503 [2024-11-28 08:25:54.691880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1293e10 with addr=10.0.0.2, port=4420 00:26:57.503 [2024-11-28 08:25:54.691889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1293e10 is same with the state(6) to be set 00:26:57.503 [2024-11-28 08:25:54.691902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1293e10 (9): Bad file descriptor 00:26:57.503 [2024-11-28 08:25:54.691923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:57.503 [2024-11-28 08:25:54.691930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:57.503 [2024-11-28 08:25:54.691939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:57.503 [2024-11-28 08:25:54.691946] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:57.503 [2024-11-28 08:25:54.691951] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:57.503 [2024-11-28 08:25:54.691956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:57.503 [2024-11-28 08:25:54.701688] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:57.503 [2024-11-28 08:25:54.701701] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:57.503 [2024-11-28 08:25:54.701706] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:57.503 [2024-11-28 08:25:54.701712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:57.503 [2024-11-28 08:25:54.701728] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:57.503 [2024-11-28 08:25:54.702135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.503 [2024-11-28 08:25:54.702149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1293e10 with addr=10.0.0.2, port=4420 00:26:57.503 [2024-11-28 08:25:54.702157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1293e10 is same with the state(6) to be set 00:26:57.503 [2024-11-28 08:25:54.702174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1293e10 (9): Bad file descriptor 00:26:57.503 [2024-11-28 08:25:54.702199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:57.503 [2024-11-28 08:25:54.702207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:57.503 [2024-11-28 08:25:54.702222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:57.503 [2024-11-28 08:25:54.702229] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:57.503 [2024-11-28 08:25:54.702234] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:57.503 [2024-11-28 08:25:54.702239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:57.503 [2024-11-28 08:25:54.711760] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:57.503 [2024-11-28 08:25:54.711772] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:57.503 [2024-11-28 08:25:54.711776] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:57.503 [2024-11-28 08:25:54.711781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:57.503 [2024-11-28 08:25:54.711796] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:57.503 [2024-11-28 08:25:54.712097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.503 [2024-11-28 08:25:54.712109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1293e10 with addr=10.0.0.2, port=4420 00:26:57.503 [2024-11-28 08:25:54.712116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1293e10 is same with the state(6) to be set 00:26:57.503 [2024-11-28 08:25:54.712128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1293e10 (9): Bad file descriptor 00:26:57.503 [2024-11-28 08:25:54.712139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:57.503 [2024-11-28 08:25:54.712145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:57.503 [2024-11-28 08:25:54.712153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:57.503 [2024-11-28 08:25:54.712165] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:57.503 [2024-11-28 08:25:54.712170] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:57.503 [2024-11-28 08:25:54.712175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:57.503 [2024-11-28 08:25:54.721828] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:57.503 [2024-11-28 08:25:54.721840] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:57.503 [2024-11-28 08:25:54.721844] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:57.503 [2024-11-28 08:25:54.721849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:57.503 [2024-11-28 08:25:54.721864] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:57.503 [2024-11-28 08:25:54.722478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.503 [2024-11-28 08:25:54.722542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1293e10 with addr=10.0.0.2, port=4420 00:26:57.503 [2024-11-28 08:25:54.722556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1293e10 is same with the state(6) to be set 00:26:57.503 [2024-11-28 08:25:54.722583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1293e10 (9): Bad file descriptor 00:26:57.503 [2024-11-28 08:25:54.722652] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:57.503 [2024-11-28 08:25:54.722681] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:57.503 [2024-11-28 08:25:54.722718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:57.503 [2024-11-28 08:25:54.722729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:57.503 [2024-11-28 08:25:54.722739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:57.504 [2024-11-28 08:25:54.722746] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:57.504 [2024-11-28 08:25:54.722752] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:57.504 [2024-11-28 08:25:54.722756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
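The eval/max-- lines repeated through the trace come from the polling helper in autotest_common.sh; a sketch reconstructed from the @918-@922 trace lines, where the per-iteration sleep and the timeout return value are assumptions not visible in the trace:

    waitforcondition() {
        local cond=$1      # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10       # attempt budget, as shown at @919
        while (( max-- )); do
            eval "$cond" && return 0   # early exit on success, as shown at @922
            sleep 0.5                  # assumed back-off between polls
        done
        return 1                       # assumed failure once attempts run out
    }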
00:26:57.504 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.504 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:57.504 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:57.504 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:57.504 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:57.504 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:57.504 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:57.504 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:57.504 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:57.504 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:57.504 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:57.504 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.504 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.504 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:57.504 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:57.504 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:57.765 08:25:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:57.765 
08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:57.765 08:25:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.765 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:57.765 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:57.766 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:57.766 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:57.766 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:57.766 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.766 08:25:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:59.148 [2024-11-28 08:25:56.024253] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:59.148 [2024-11-28 08:25:56.024267] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:59.148 [2024-11-28 08:25:56.024276] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:59.148 [2024-11-28 08:25:56.151642] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:59.409 [2024-11-28 08:25:56.462083] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:26:59.409 [2024-11-28 08:25:56.462702] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1291700:1 started. 
00:26:59.409 [2024-11-28 08:25:56.464018] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:59.409 [2024-11-28 08:25:56.464039] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:59.409 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.409 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:59.409 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:59.409 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:59.409 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:59.409 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:59.409 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:59.409 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:59.409 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:59.409 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:59.410 [2024-11-28 08:25:56.473193] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1291700 was disconnected and freed. delete nvme_qpair. 
00:26:59.410 request: 00:26:59.410 { 00:26:59.410 "name": "nvme", 00:26:59.410 "trtype": "tcp", 00:26:59.410 "traddr": "10.0.0.2", 00:26:59.410 "adrfam": "ipv4", 00:26:59.410 "trsvcid": "8009", 00:26:59.410 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:59.410 "wait_for_attach": true, 00:26:59.410 "method": "bdev_nvme_start_discovery", 00:26:59.410 "req_id": 1 00:26:59.410 } 00:26:59.410 Got JSON-RPC error response 00:26:59.410 response: 00:26:59.410 { 00:26:59.410 "code": -17, 00:26:59.410 "message": "File exists" 00:26:59.410 } 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:59.410 request: 00:26:59.410 { 00:26:59.410 "name": "nvme_second", 00:26:59.410 "trtype": "tcp", 00:26:59.410 "traddr": "10.0.0.2", 00:26:59.410 "adrfam": "ipv4", 00:26:59.410 "trsvcid": "8009", 00:26:59.410 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:59.410 "wait_for_attach": true, 00:26:59.410 "method": "bdev_nvme_start_discovery", 00:26:59.410 "req_id": 1 00:26:59.410 } 00:26:59.410 Got JSON-RPC error response 00:26:59.410 response: 00:26:59.410 { 00:26:59.410 "code": -17, 00:26:59.410 "message": "File exists" 00:26:59.410 } 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:59.410 08:25:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:59.410 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.671 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:59.671 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:59.671 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:59.671 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:59.671 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:59.672 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:59.672 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:59.672 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:59.672 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:59.672 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.672 08:25:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:00.613 [2024-11-28 08:25:57.720519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.613 [2024-11-28 08:25:57.720541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1290e70 with addr=10.0.0.2, port=8010 00:27:00.613 [2024-11-28 08:25:57.720550] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:00.613 [2024-11-28 08:25:57.720555] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:00.613 [2024-11-28 08:25:57.720560] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:01.555 [2024-11-28 08:25:58.722868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.555 [2024-11-28 08:25:58.722886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1290e70 with addr=10.0.0.2, port=8010 00:27:01.555 [2024-11-28 08:25:58.722894] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:01.555 [2024-11-28 08:25:58.722899] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:01.555 [2024-11-28 08:25:58.722904] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:02.497 [2024-11-28 08:25:59.724876] 
bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:27:02.497 request: 00:27:02.497 { 00:27:02.497 "name": "nvme_second", 00:27:02.497 "trtype": "tcp", 00:27:02.497 "traddr": "10.0.0.2", 00:27:02.497 "adrfam": "ipv4", 00:27:02.497 "trsvcid": "8010", 00:27:02.497 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:02.497 "wait_for_attach": false, 00:27:02.497 "attach_timeout_ms": 3000, 00:27:02.497 "method": "bdev_nvme_start_discovery", 00:27:02.497 "req_id": 1 00:27:02.497 } 00:27:02.497 Got JSON-RPC error response 00:27:02.497 response: 00:27:02.497 { 00:27:02.497 "code": -110, 00:27:02.497 "message": "Connection timed out" 00:27:02.497 } 00:27:02.497 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:02.497 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:02.497 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:02.497 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:02.497 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:02.497 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:27:02.497 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:02.497 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:02.497 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.497 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:02.497 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:02.497 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:02.497 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.497 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:27:02.497 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:27:02.497 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2093894 00:27:02.497 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:27:02.497 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:02.758 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:27:02.758 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:02.758 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:27:02.758 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:02.758 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:02.758 rmmod nvme_tcp 00:27:02.758 rmmod nvme_fabrics 00:27:02.758 rmmod nvme_keyring 00:27:02.758 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:02.758 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:27:02.758 08:25:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:27:02.758 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2093624 ']' 00:27:02.758 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2093624 00:27:02.758 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2093624 ']' 00:27:02.758 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2093624 00:27:02.758 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:27:02.758 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:02.758 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2093624 00:27:02.758 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:02.758 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:02.758 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2093624' 00:27:02.758 killing process with pid 2093624 00:27:02.758 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2093624 00:27:02.758 08:25:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2093624 00:27:02.758 08:26:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:02.758 08:26:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:02.758 08:26:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:02.758 08:26:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:27:02.758 08:26:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:27:02.758 08:26:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:02.758 08:26:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:27:02.758 08:26:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:02.758 08:26:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:02.758 08:26:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.758 08:26:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:02.758 08:26:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.306 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:05.306 00:27:05.306 real 0m20.327s 00:27:05.306 user 0m23.551s 00:27:05.306 sys 0m7.230s 00:27:05.306 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:05.306 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:05.306 ************************************ 00:27:05.306 END TEST nvmf_host_discovery 00:27:05.306 ************************************ 00:27:05.306 08:26:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test 
nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:05.306 08:26:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:05.306 08:26:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:05.306 08:26:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.306 ************************************ 00:27:05.306 START TEST nvmf_host_multipath_status 00:27:05.306 ************************************ 00:27:05.306 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:05.306 * Looking for test storage... 00:27:05.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:05.306 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:05.306 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:27:05.306 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:05.306 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:05.306 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:05.306 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:05.306 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:05.306 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:27:05.306 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:27:05.306 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:05.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.307 --rc genhtml_branch_coverage=1 00:27:05.307 --rc genhtml_function_coverage=1 00:27:05.307 --rc genhtml_legend=1 00:27:05.307 --rc geninfo_all_blocks=1 00:27:05.307 --rc geninfo_unexecuted_blocks=1 00:27:05.307 00:27:05.307 ' 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:05.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.307 --rc genhtml_branch_coverage=1 00:27:05.307 --rc genhtml_function_coverage=1 00:27:05.307 --rc genhtml_legend=1 00:27:05.307 --rc geninfo_all_blocks=1 00:27:05.307 --rc geninfo_unexecuted_blocks=1 00:27:05.307 00:27:05.307 ' 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:05.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.307 --rc genhtml_branch_coverage=1 00:27:05.307 --rc genhtml_function_coverage=1 00:27:05.307 --rc genhtml_legend=1 00:27:05.307 --rc geninfo_all_blocks=1 00:27:05.307 --rc geninfo_unexecuted_blocks=1 00:27:05.307 00:27:05.307 ' 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:05.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:05.307 --rc genhtml_branch_coverage=1 00:27:05.307 --rc genhtml_function_coverage=1 00:27:05.307 --rc genhtml_legend=1 00:27:05.307 --rc geninfo_all_blocks=1 00:27:05.307 --rc geninfo_unexecuted_blocks=1 00:27:05.307 00:27:05.307 ' 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
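The scripts/common.sh trace above runs lt 1.15 2 to pick the lcov option flavor, comparing dot-separated version fields via cmp_versions. A behaviorally equivalent sketch using sort -V instead of the traced field-by-field walk, offered as an assumption rather than the actual implementation:

    # True when the first version sorts strictly before the second.
    lt() {
        [ "$1" = "$2" ] && return 1
        [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }

    lt 1.15 2 && echo '1.15 < 2'   # matches the comparison in the trace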
00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:05.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:05.307 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:05.308 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:05.308 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:05.308 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.308 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:05.308 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.308 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:05.308 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:05.308 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:27:05.308 08:26:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:27:13.456 08:26:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:13.456 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.456 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
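The pass above buckets NICs by PCI vendor/device ID before picking the TCP test interfaces; on this rig the two hits that follow are Intel E810 ports (0x8086:0x159b, ice driver). A minimal way to reproduce that classification outside the harness, assuming only pciutils is available (lspci is not part of the traced script; the ID list is copied from the e810 bucket declared above):

    #!/usr/bin/env bash
    # List NICs the harness would file under "e810" (Intel 0x1592 / 0x159b).
    # On this rig it prints the two 0000:4b:00.x ports found below.
    for dev_id in 1592 159b; do
        lspci -D -d "8086:${dev_id}"
    done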
00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:13.457 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:13.457 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:27:13.457 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:13.457 08:26:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:13.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:13.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:27:13.457 00:27:13.457 --- 10.0.0.2 ping statistics --- 00:27:13.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.457 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:13.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:13.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:27:13.457 00:27:13.457 --- 10.0.0.1 ping statistics --- 00:27:13.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.457 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2099874 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2099874 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2099874 ']' 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:13.457 08:26:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:13.457 08:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:13.457 [2024-11-28 08:26:09.956706] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:27:13.457 [2024-11-28 08:26:09.956772] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:13.457 [2024-11-28 08:26:10.057041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:13.457 [2024-11-28 08:26:10.110152] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:13.457 [2024-11-28 08:26:10.110218] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:13.457 [2024-11-28 08:26:10.110228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:13.457 [2024-11-28 08:26:10.110235] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:13.457 [2024-11-28 08:26:10.110241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:13.457 [2024-11-28 08:26:10.111901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:13.457 [2024-11-28 08:26:10.111904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.719 08:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:13.719 08:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:13.719 08:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:13.719 08:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:13.719 08:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:13.719 08:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:13.719 08:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2099874 00:27:13.719 08:26:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:13.719 [2024-11-28 08:26:10.994599] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:13.980 08:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:13.980 Malloc0 00:27:13.980 08:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:27:14.241 08:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:14.503 08:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:14.765 [2024-11-28 08:26:11.812212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:14.765 08:26:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:14.765 [2024-11-28 08:26:12.016730] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:15.026 08:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2100380 00:27:15.026 08:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:15.026 08:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:15.026 08:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2100380 /var/tmp/bdevperf.sock 00:27:15.026 08:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2100380 ']' 00:27:15.026 08:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:15.026 08:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:15.026 08:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:15.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
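The target-side bring-up traced above condenses to six RPCs: create the TCP transport, back it with a malloc bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512), then expose that namespace on two listeners so the host sees two ANA paths. A condensed replay under the same workspace path ($rpc is shorthand introduced here, not a variable from the script; it must run where the target's default RPC socket is reachable):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Transport plus backing bdev: 64 MiB of 512-byte blocks named Malloc0.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    # Subsystem with allow-any-host (-a), ANA reporting (-r), max 2 namespaces (-m 2).
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Two listeners on the same IP, different ports: the two multipath legs.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421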
00:27:15.026 08:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:15.026 08:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:15.970 08:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:15.970 08:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:15.970 08:26:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:15.970 08:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:16.540 Nvme0n1 00:27:16.540 08:26:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:16.800 Nvme0n1 00:27:16.800 08:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:16.800 08:26:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:19.340 08:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:19.340 08:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:19.340 08:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:19.340 08:26:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:20.280 08:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:20.280 08:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:20.280 08:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.280 08:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:20.540 08:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.540 08:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:20.540 08:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.540 08:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:20.540 08:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:20.540 08:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:20.540 08:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.540 08:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:20.801 08:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.801 08:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:20.801 08:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.801 08:26:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:21.061 08:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.061 08:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:21.061 08:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.061 08:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:21.061 08:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.061 08:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:21.061 08:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.061 08:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:21.320 08:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.320 08:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:21.320 08:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
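Every ANA flip in this test is the same pair of listener updates, one per port, applied on the target side; the @59 call for 4420 was just traced and its @60 companion for 4421 follows immediately below. A sketch of that helper, mirroring the traced calls (reuses the $rpc shorthand from the bring-up sketch above):

    # usage: set_ANA_state <ana-state-for-4420> <ana-state-for-4421>
    # states: optimized | non_optimized | inaccessible
    set_ANA_state() {
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }
    set_ANA_state non_optimized optimized   # the transition in flight here

The sleep 1 that the script issues after each flip gives the host-side bdev_nvme layer time to pick up the new ANA state before the next verification round.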
00:27:21.579 08:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:21.839 08:26:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:22.778 08:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:22.778 08:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:22.778 08:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.778 08:26:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:23.037 08:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:23.037 08:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:23.037 08:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.037 08:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:23.037 08:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.037 08:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:23.037 08:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.037 08:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:23.295 08:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.295 08:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:23.295 08:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.295 08:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:23.554 08:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.554 08:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:23.554 08:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
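Each status probe in these rounds is the same two-stage pipeline: ask bdevperf, over its private RPC socket, for the NVMe-oF I/O paths, then pluck one boolean for one listener port out of the JSON (the bdev_nvme_get_io_paths call was just traced; its matching jq filter follows below). A reusable sketch of that probe, assuming jq is installed and bdevperf is still listening on /var/tmp/bdevperf.sock; the function name mirrors the script's own port_status helper:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # port_status <trsvcid> <field>, field = current | connected | accessible
    port_status() {
        local port=$1 field=$2
        "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field"
    }
    port_status 4420 current   # "true" while I/O is steered to the 4420 leg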
00:27:23.554 08:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:23.554 08:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.554 08:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:23.554 08:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.554 08:26:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:23.814 08:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.814 08:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:23.814 08:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:24.077 08:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:24.338 08:26:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:25.278 08:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:25.278 08:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:25.278 08:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.279 08:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:25.540 08:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.540 08:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:25.540 08:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.540 08:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:25.540 08:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:25.540 08:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:25.540 08:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.540 08:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:25.801 08:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.801 08:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:25.801 08:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.801 08:26:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:26.061 08:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.061 08:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:26.061 08:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.061 08:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:26.061 08:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.061 08:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:26.061 08:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:26.061 08:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.321 08:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.322 08:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:26.322 08:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:26.582 08:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:26.842 08:26:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:27.840 08:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:27.841 08:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:27.841 08:26:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.841 08:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:27.841 08:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:27.841 08:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:27.841 08:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.841 08:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:28.100 08:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:28.100 08:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:28.100 08:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.100 08:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:28.361 08:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.361 08:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:28.361 08:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.361 08:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:28.361 08:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.361 08:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:28.362 08:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.362 08:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:28.622 08:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.622 08:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:28.622 08:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.622 08:26:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:28.885 08:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:28.885 08:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:28.885 08:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:29.146 08:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:29.146 08:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:30.085 08:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:30.085 08:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:30.085 08:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.085 08:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:30.346 08:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:30.346 08:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:30.346 08:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.346 08:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:30.608 08:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:30.608 08:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:30.608 08:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.608 08:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:30.608 08:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:30.608 08:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:30.608 08:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.608 08:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:30.868 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:30.868 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:30.868 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.868 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:31.130 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:31.130 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:31.130 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.130 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:31.393 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:31.393 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:31.393 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:31.393 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:31.652 08:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:32.592 08:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:32.592 08:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:32.592 08:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.592 08:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:32.852 08:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:32.852 08:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:32.852 08:26:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:32.852 08:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.113 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.113 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:33.114 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.114 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:33.114 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.114 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:33.114 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.114 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:33.375 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.375 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:33.375 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.375 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:33.636 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:33.636 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:33.636 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:33.636 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:33.897 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:33.897 08:26:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:33.897 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:27:33.897 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:34.158 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:34.158 08:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:35.544 08:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:35.544 08:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:35.544 08:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:35.544 08:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:35.544 08:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:35.544 08:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:35.544 08:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:35.544 08:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:35.544 08:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:35.545 08:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:35.545 08:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:35.545 08:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:35.806 08:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:35.806 08:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:35.806 08:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:35.806 08:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:36.067 08:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:36.067 08:26:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:36.067 08:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:36.067 08:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:36.329 08:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:36.329 08:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:36.329 08:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:36.329 08:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:36.329 08:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:36.329 08:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:36.329 08:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:36.590 08:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:36.850 08:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:37.807 08:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:37.807 08:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:37.807 08:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.807 08:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:38.068 08:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:38.068 08:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:38.068 08:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:38.068 08:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:38.068 08:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:38.068 08:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:38.068 08:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:38.068 08:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:38.328 08:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:38.328 08:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:38.328 08:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:38.328 08:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:38.588 08:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:38.588 08:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:38.588 08:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:38.588 08:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:38.588 08:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:38.588 08:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:38.848 08:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:38.848 08:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:38.848 08:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:38.848 08:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:38.848 08:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:39.107 08:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:39.367 08:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
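
Every port_status call traced here follows one pattern, visible in the repeated multipath_status.sh@64 lines: query bdevperf's RPC socket for its current I/O paths, filter the JSON down to one attribute of the listener on the given port, and compare it against the expected value; check_status expands to six such assertions (the @68-@73 lines). A minimal sketch reconstructed from the trace follows — the function bodies are inferred from the @64 lines, not copied verbatim from the SPDK script:

port_status() {
    local port=$1 attr=$2 expected=$3   # attr: current | connected | accessible
    local actual
    # Ask bdevperf (via its RPC socket) for all io_paths, then pick out one
    # attribute of the path whose listener uses the given TCP service port.
    actual=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ "$actual" == "$expected" ]]
}

check_status() {
    # args: 4420/4421 current, 4420/4421 connected, 4420/4421 accessible
    port_status 4420 current "$1" && port_status 4421 current "$2" &&
    port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
    port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
}

So the check that follows the sleep here, check_status true true true true true true, asserts that with both listeners set to non_optimized under the active_active multipath policy, both paths remain current, connected, and accessible.
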
00:27:40.327 08:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:40.327 08:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:40.327 08:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.327 08:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:40.327 08:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.327 08:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:40.327 08:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.327 08:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:40.587 08:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.588 08:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:40.588 08:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.588 08:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:40.848 08:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.848 08:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:40.848 08:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.848 08:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:41.109 08:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:41.109 08:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:41.109 08:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:41.109 08:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:41.109 08:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:41.109 08:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:41.109 08:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:41.109 08:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:41.370 08:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:41.370 08:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:41.370 08:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:41.631 08:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:41.631 08:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:43.016 08:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:43.016 08:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:43.016 08:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.016 08:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:43.016 08:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:43.016 08:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:43.016 08:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.016 08:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:43.016 08:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:43.016 08:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:43.016 08:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.016 08:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:43.276 08:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:27:43.276 08:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:43.277 08:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.277 08:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:43.538 08:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:43.538 08:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:43.538 08:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.538 08:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:43.799 08:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:43.799 08:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:43.799 08:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.799 08:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:43.799 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:43.799 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2100380 00:27:43.799 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2100380 ']' 00:27:43.799 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2100380 00:27:43.799 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:43.799 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:43.799 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2100380 00:27:44.063 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:27:44.063 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:44.063 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2100380' 00:27:44.063 killing process with pid 2100380 00:27:44.063 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2100380 00:27:44.063 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2100380 00:27:44.063 { 00:27:44.063 "results": [ 00:27:44.063 { 00:27:44.063 "job": "Nvme0n1", 
00:27:44.064 "core_mask": "0x4", 00:27:44.064 "workload": "verify", 00:27:44.064 "status": "terminated", 00:27:44.064 "verify_range": { 00:27:44.064 "start": 0, 00:27:44.064 "length": 16384 00:27:44.064 }, 00:27:44.064 "queue_depth": 128, 00:27:44.064 "io_size": 4096, 00:27:44.064 "runtime": 26.925825, 00:27:44.064 "iops": 11882.607125315566, 00:27:44.064 "mibps": 46.41643408326393, 00:27:44.064 "io_failed": 0, 00:27:44.064 "io_timeout": 0, 00:27:44.064 "avg_latency_us": 10751.611831468974, 00:27:44.064 "min_latency_us": 802.1333333333333, 00:27:44.064 "max_latency_us": 3019898.88 00:27:44.064 } 00:27:44.064 ], 00:27:44.064 "core_count": 1 00:27:44.064 } 00:27:44.064 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2100380 00:27:44.064 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:44.064 [2024-11-28 08:26:12.103945] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:27:44.064 [2024-11-28 08:26:12.104028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2100380 ] 00:27:44.064 [2024-11-28 08:26:12.198059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.064 [2024-11-28 08:26:12.249283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:44.064 Running I/O for 90 seconds... 00:27:44.064 10874.00 IOPS, 42.48 MiB/s [2024-11-28T07:26:41.353Z] 10947.50 IOPS, 42.76 MiB/s [2024-11-28T07:26:41.353Z] 10991.67 IOPS, 42.94 MiB/s [2024-11-28T07:26:41.353Z] 11392.00 IOPS, 44.50 MiB/s [2024-11-28T07:26:41.353Z] 11730.80 IOPS, 45.82 MiB/s [2024-11-28T07:26:41.353Z] 11919.17 IOPS, 46.56 MiB/s [2024-11-28T07:26:41.353Z] 12056.00 IOPS, 47.09 MiB/s [2024-11-28T07:26:41.353Z] 12138.75 IOPS, 47.42 MiB/s [2024-11-28T07:26:41.353Z] 12202.44 IOPS, 47.67 MiB/s [2024-11-28T07:26:41.353Z] 12253.30 IOPS, 47.86 MiB/s [2024-11-28T07:26:41.353Z] 12311.27 IOPS, 48.09 MiB/s [2024-11-28T07:26:41.353Z] 12347.67 IOPS, 48.23 MiB/s [2024-11-28T07:26:41.353Z] [2024-11-28 08:26:26.172598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.064 [2024-11-28 08:26:26.172629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.172662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.064 [2024-11-28 08:26:26.172669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.172680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.064 [2024-11-28 08:26:26.172686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.172697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.064 
[2024-11-28 08:26:26.172702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.172712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.064 [2024-11-28 08:26:26.172718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.172728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.064 [2024-11-28 08:26:26.172733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.172743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.064 [2024-11-28 08:26:26.172748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.172759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.064 [2024-11-28 08:26:26.172764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.172774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.064 [2024-11-28 08:26:26.172779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.172790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.064 [2024-11-28 08:26:26.172801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.172811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.064 [2024-11-28 08:26:26.172816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.172826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.064 [2024-11-28 08:26:26.172832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.172842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.064 [2024-11-28 08:26:26.172847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.172858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9672 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:44.064 [2024-11-28 08:26:26.172863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.172873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.064 [2024-11-28 08:26:26.172878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.172889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.064 [2024-11-28 08:26:26.172894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.172904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.064 [2024-11-28 08:26:26.172909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.172920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.064 [2024-11-28 08:26:26.172925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.172936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.064 [2024-11-28 08:26:26.172942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.172952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.064 [2024-11-28 08:26:26.172958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.172968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.064 [2024-11-28 08:26:26.172973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.172983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.064 [2024-11-28 08:26:26.172989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.173000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.064 [2024-11-28 08:26:26.173005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.173016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:91 nsid:1 lba:9752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.064 [2024-11-28 08:26:26.173021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.174220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.064 [2024-11-28 08:26:26.174233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.174248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.064 [2024-11-28 08:26:26.174254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:44.064 [2024-11-28 08:26:26.174268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:44.065 
[2024-11-28 08:26:26.174590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 
sqhd:0038 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.174983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.174998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.175003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.175018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.175023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.175038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.175044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.175059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.175064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.175079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.175084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.175099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.065 [2024-11-28 08:26:26.175104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:44.065 [2024-11-28 08:26:26.175119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.066 [2024-11-28 08:26:26.175124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:26.175139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.066 [2024-11-28 08:26:26.175145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:26.175164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.066 [2024-11-28 08:26:26.175169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:26.175184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.066 [2024-11-28 08:26:26.175190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:26.175205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.066 [2024-11-28 08:26:26.175210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:26.175225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.066 [2024-11-28 08:26:26.175230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:26.175247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.066 [2024-11-28 08:26:26.175252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:26.175267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.066 [2024-11-28 08:26:26.175273] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:26.175289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.066 [2024-11-28 08:26:26.175294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:26.175309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.066 [2024-11-28 08:26:26.175315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:26.175330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.066 [2024-11-28 08:26:26.175335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:26.175350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.066 [2024-11-28 08:26:26.175356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:26.175371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.066 [2024-11-28 08:26:26.175376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:26.175391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.066 [2024-11-28 08:26:26.175397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:26.175412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.066 [2024-11-28 08:26:26.175417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:26.175432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.066 [2024-11-28 08:26:26.175437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:26.175452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:10216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.066 [2024-11-28 08:26:26.175457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:26.175472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:44.066 [2024-11-28 08:26:26.175477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:26.175498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:44.066 [2024-11-28 08:26:26.175503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:44.066 11434.77 IOPS, 44.67 MiB/s [2024-11-28T07:26:41.355Z] 10618.00 IOPS, 41.48 MiB/s [2024-11-28T07:26:41.355Z] 9910.13 IOPS, 38.71 MiB/s [2024-11-28T07:26:41.355Z] 10062.44 IOPS, 39.31 MiB/s [2024-11-28T07:26:41.355Z] 10211.29 IOPS, 39.89 MiB/s [2024-11-28T07:26:41.355Z] 10544.78 IOPS, 41.19 MiB/s [2024-11-28T07:26:41.355Z] 10868.53 IOPS, 42.46 MiB/s [2024-11-28T07:26:41.355Z] 11096.05 IOPS, 43.34 MiB/s [2024-11-28T07:26:41.355Z] 11172.29 IOPS, 43.64 MiB/s [2024-11-28T07:26:41.355Z] 11240.50 IOPS, 43.91 MiB/s [2024-11-28T07:26:41.355Z] 11426.04 IOPS, 44.63 MiB/s [2024-11-28T07:26:41.355Z] 11655.00 IOPS, 45.53 MiB/s [2024-11-28T07:26:41.355Z] [2024-11-28 08:26:38.855744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:101152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.066 [2024-11-28 08:26:38.855780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:38.855812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.066 [2024-11-28 08:26:38.855818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:38.855830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:101216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.066 [2024-11-28 08:26:38.855835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:38.855846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:101248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.066 [2024-11-28 08:26:38.855851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:38.855861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.066 [2024-11-28 08:26:38.855867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:38.855877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.066 [2024-11-28 08:26:38.855883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:38.855893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:101344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.066 
[2024-11-28 08:26:38.855898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:44.066 [2024-11-28 08:26:38.855909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:101376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.066 [2024-11-28 08:26:38.855914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:44.066
[... the same command/completion pattern repeats from 08:26:38.855924 through 08:26:38.857484 for cids 121, 25, 122, 99, 83, 14, 108, 4, 46, 125, 54, 107, 0, 110, 115, 19, 53, 84, 50, 30, 76, 59, 60, 65, 66, 67, 64 and 24 (READs over lba 101176-101568 and WRITEs over lba 101608-101712, all len:8), with every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 001b through 0035 ...]
00:27:44.067 [2024-11-28 08:26:38.857484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0
sqhd:0036 p:0 m:0 dnr:0 00:27:44.067 11806.84 IOPS, 46.12 MiB/s [2024-11-28T07:26:41.356Z] 11846.00 IOPS, 46.27 MiB/s [2024-11-28T07:26:41.356Z] Received shutdown signal, test time was about 26.926434 seconds 00:27:44.067 00:27:44.067 Latency(us) 00:27:44.067 [2024-11-28T07:26:41.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.067 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:44.067 Verification LBA range: start 0x0 length 0x4000 00:27:44.067 Nvme0n1 : 26.93 11882.61 46.42 0.00 0.00 10751.61 802.13 3019898.88 00:27:44.067 [2024-11-28T07:26:41.356Z] =================================================================================================================== 00:27:44.067 [2024-11-28T07:26:41.356Z] Total : 11882.61 46.42 0.00 0.00 10751.61 802.13 3019898.88 00:27:44.067 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:44.328 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:27:44.328 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:44.328 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:27:44.328 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:44.328 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:27:44.328 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:44.328 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:27:44.328 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:44.328 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:44.328 rmmod nvme_tcp 00:27:44.328 rmmod nvme_fabrics 00:27:44.328 rmmod nvme_keyring 00:27:44.328 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:44.328 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:27:44.328 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:27:44.328 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2099874 ']' 00:27:44.328 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2099874 00:27:44.328 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2099874 ']' 00:27:44.328 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2099874 00:27:44.328 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:44.328 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:44.328 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2099874 00:27:44.328 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 
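The NOTICE flood above is the multipath test behaving as designed: with one path forced into ANA state INACCESSIBLE, every in-flight READ/WRITE on qid:1 completes with ASYMMETRIC ACCESS INACCESSIBLE (SCT 03h / SC 02h) and gets retried on the surviving path, and bdevperf still closes the run cleanly (26.93 s, 11882.61 IOPS average, zero Fail/s and TO/s). The teardown that follows is all rpc.py plus kernel module cleanup; a minimal sketch of the same sequence, assuming the default target RPC socket:

    # Drop the subsystem the multipath test created (NQN taken from the trace above)
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # nvmftestfini: flush, then unload the kernel initiator stack; the retry loop
    # exists because nvme-tcp can stay busy for a moment after the last disconnect
    sync
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # also drags out nvme_fabrics and nvme_keyring
    done
    modprobe -v -r nvme-fabrics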
00:27:44.328 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:44.328 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2099874' 00:27:44.328 killing process with pid 2099874 00:27:44.328 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2099874 00:27:44.328 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2099874 00:27:44.589 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:44.589 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:44.589 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:44.589 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:27:44.589 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:27:44.589 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:44.589 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:27:44.589 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:44.589 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:44.589 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.589 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:44.589 08:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:46.501 08:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:46.501 00:27:46.501 real 0m41.546s 00:27:46.501 user 1m47.809s 00:27:46.501 sys 0m11.494s 00:27:46.501 08:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:46.501 08:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:46.501 ************************************ 00:27:46.501 END TEST nvmf_host_multipath_status 00:27:46.501 ************************************ 00:27:46.501 08:26:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:46.501 08:26:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:46.501 08:26:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:46.501 08:26:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.763 ************************************ 00:27:46.763 START TEST nvmf_discovery_remove_ifc 00:27:46.763 ************************************ 00:27:46.763 08:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:46.763 * Looking for test storage... 
00:27:46.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:46.763 08:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:46.763 08:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:27:46.763 08:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:46.763 08:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:46.763 08:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:46.763 08:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:46.763 08:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:46.763 08:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:27:46.763 08:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:27:46.763 08:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:27:46.763 08:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:27:46.763 08:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:27:46.763 08:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:27:46.763 08:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:27:46.763 08:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:46.763 08:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:27:46.763 08:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:27:46.763 08:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:46.763 08:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:46.763 08:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:27:46.763 08:26:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:46.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.763 --rc genhtml_branch_coverage=1 00:27:46.763 --rc genhtml_function_coverage=1 00:27:46.763 --rc genhtml_legend=1 00:27:46.763 --rc geninfo_all_blocks=1 00:27:46.763 --rc geninfo_unexecuted_blocks=1 00:27:46.763 00:27:46.763 ' 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:46.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.763 --rc genhtml_branch_coverage=1 00:27:46.763 --rc genhtml_function_coverage=1 00:27:46.763 --rc genhtml_legend=1 00:27:46.763 --rc geninfo_all_blocks=1 00:27:46.763 --rc geninfo_unexecuted_blocks=1 00:27:46.763 00:27:46.763 ' 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:46.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.763 --rc genhtml_branch_coverage=1 00:27:46.763 --rc genhtml_function_coverage=1 00:27:46.763 --rc genhtml_legend=1 00:27:46.763 --rc geninfo_all_blocks=1 00:27:46.763 --rc geninfo_unexecuted_blocks=1 00:27:46.763 00:27:46.763 ' 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:46.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:46.763 --rc genhtml_branch_coverage=1 00:27:46.763 --rc genhtml_function_coverage=1 00:27:46.763 --rc genhtml_legend=1 00:27:46.763 --rc geninfo_all_blocks=1 00:27:46.763 --rc geninfo_unexecuted_blocks=1 00:27:46.763 00:27:46.763 ' 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:46.763 
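The scripts/common.sh trace above ('lt 1.15 2' through cmp_versions) is a pure-bash dotted-version comparison: both strings are split on '.' and '-', then walked field by field as integers, which is how lcov 1.15 is classified as older than 2 and the legacy LCOV_OPTS get exported. A condensed sketch of the same logic (a standalone helper written for illustration, not the literal common.sh source):

    # Return 0 (true) when dotted version $1 is strictly lower than $2
    version_lt() {
        local -a ver1 ver2
        local IFS=.- i
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0   # first lower field decides
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }

    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'old lcov, keep legacy flags'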
08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:46.763 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:46.764 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:27:46.764 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:46.764 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:46.764 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:46.764 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:46.764 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:46.764 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:46.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:46.764 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:46.764 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:46.764 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:46.764 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:46.764 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
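One real wart is visible in the trace above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' because the flag it tests is empty in this environment, and test(1) cannot parse an empty string as an integer, hence the '[: : integer expression expected' complaint. The check degrades to false and the run continues, but a defaulted expansion would avoid the noise entirely (SOME_FLAG below is a placeholder name, not the actual variable on line 33):

    # '[' "$SOME_FLAG" -eq 1 ']' breaks when SOME_FLAG is empty or unset;
    # defaulting the expansion keeps the comparison well-formed
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo 'flag enabled'
    fi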
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:46.764 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:46.764 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:46.764 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:46.764 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:46.764 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:46.764 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:46.764 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:46.764 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:46.764 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:47.025 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:47.025 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.025 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:47.026 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.026 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:47.026 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:47.026 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:27:47.026 08:26:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:55.170 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:55.170 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:27:55.170 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:55.170 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:55.170 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:55.170 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:27:55.171 08:26:51 
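discovery_remove_ifc.sh runs two SPDK applications side by side: the NVMe-oF target on the default RPC socket, and an initiator-side app that will own the NVMe bdevs on the private socket defined above (host_sock=/tmp/host.sock). Every rpc_cmd later in the trace is routed between the two with the -s flag, roughly:

    # Target app: default RPC socket /var/tmp/spdk.sock
    scripts/rpc.py nvmf_get_subsystems

    # Host/initiator app: private socket handed to it at start-up via -r /tmp/host.sock
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs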
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:55.171 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:55.171 08:26:51 
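The e810/x722/mlx arrays above are plain PCI vendor:device whitelists; Intel E810 ports identify as 8086:159b, which is why 0000:4b:00.0 matches here and its twin port matches just below. For each hit the script then globs sysfs to resolve the bound kernel netdev, which is what produces the upcoming 'Found net devices under ...' lines. The same lookup by hand, using the addresses from this run:

    for pci in 0000:4b:00.0 0000:4b:00.1; do
        # every entry under .../net is a netdev bound to that PCI function
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done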
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:55.171 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:55.171 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:55.171 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:55.171 
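nvmf_tcp_init, traced above, fakes a two-host NVMe/TCP topology on a single machine: one E810 port (cvl_0_0, the target side) is moved into a private network namespace, the other (cvl_0_1, the initiator side) stays in the root namespace, and 10.0.0.0/24 addresses put them on the same wire. Condensed, the wiring is:

    ip netns add cvl_0_0_ns_spdk                  # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

The two pings that follow (0.621 ms out, 0.274 ms back) are the smoke test that both namespaces can reach each other before any SPDK process starts.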
08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:55.171 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:55.171 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:27:55.171 00:27:55.171 --- 10.0.0.2 ping statistics --- 00:27:55.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:55.171 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:55.171 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:55.171 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:27:55.171 00:27:55.171 --- 10.0.0.1 ping statistics --- 00:27:55.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:55.171 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:55.171 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:55.172 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:55.172 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:55.172 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:55.172 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:55.172 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:55.172 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:55.172 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:55.172 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:55.172 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:55.172 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2110404 00:27:55.172 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2110404 00:27:55.172 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:55.172 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2110404 ']' 00:27:55.172 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.172 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:55.172 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:55.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:55.172 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:55.172 08:26:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:55.172 [2024-11-28 08:26:51.640113] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:27:55.172 [2024-11-28 08:26:51.640212] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:55.172 [2024-11-28 08:26:51.739542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.172 [2024-11-28 08:26:51.789047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:55.172 [2024-11-28 08:26:51.789098] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:55.172 [2024-11-28 08:26:51.789106] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:55.172 [2024-11-28 08:26:51.789113] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:55.172 [2024-11-28 08:26:51.789119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:55.172 [2024-11-28 08:26:51.789898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.172 08:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:55.172 08:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:55.172 08:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:55.172 08:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:55.172 08:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:55.433 08:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:55.433 08:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:55.433 08:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.433 08:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:55.433 [2024-11-28 08:26:52.508815] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.433 [2024-11-28 08:26:52.517078] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:55.433 null0 00:27:55.433 [2024-11-28 08:26:52.549018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.433 08:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.433 08:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2110581 00:27:55.433 08:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2110581 /tmp/host.sock 00:27:55.433 08:26:52 
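At this point the target is fully configured: the rpc_cmd batch at @43, which the trace does not expand, created the TCP transport, a null bdev, and a subsystem with its listeners, and that is what the 'Listening on 10.0.0.2 port 8009/4420' notices and the bare 'null0' line reflect. Reconstructed from those notices, the batch amounts to roughly the following (bdev size and subsystem flags are assumed for illustration; only the names, NQNs, and ports are taken from the log):

    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py bdev_null_create null0 1000 512     # bdev name from the log; size/bs assumed
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009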
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:55.433 08:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2110581 ']' 00:27:55.433 08:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:55.433 08:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:55.433 08:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:55.433 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:55.433 08:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:55.433 08:26:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:55.433 [2024-11-28 08:26:52.626816] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:27:55.433 [2024-11-28 08:26:52.626882] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2110581 ] 00:27:55.694 [2024-11-28 08:26:52.724323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.694 [2024-11-28 08:26:52.778347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.266 08:26:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:56.266 08:26:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:56.266 08:26:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:56.266 08:26:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:56.266 08:26:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.266 08:26:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:56.266 08:26:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.266 08:26:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:56.266 08:26:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.266 08:26:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:56.527 08:26:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.527 08:26:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:56.527 08:26:53 
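The host attaches through the discovery service rather than connecting to the subsystem directly; reformatted, the @69 call is the heart of the test, and its three timers are what make the later interface-removal phase quick: reconnects are attempted every second, I/O fails fast after one second of disconnection, and after two seconds the controller (and with it the nvme0n1 bdev) is deleted outright:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 \
        --wait-for-attach    # block until the discovered controller is created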
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.527 08:26:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:57.470 [2024-11-28 08:26:54.618310] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:57.470 [2024-11-28 08:26:54.618331] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:57.470 [2024-11-28 08:26:54.618344] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:57.470 [2024-11-28 08:26:54.744754] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:57.730 [2024-11-28 08:26:54.806417] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:57.730 [2024-11-28 08:26:54.807550] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1307410:1 started. 00:27:57.730 [2024-11-28 08:26:54.809111] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:57.730 [2024-11-28 08:26:54.809156] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:57.730 [2024-11-28 08:26:54.809186] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:57.730 [2024-11-28 08:26:54.809199] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:57.730 [2024-11-28 08:26:54.809219] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:57.730 08:26:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.730 08:26:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:57.730 08:26:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:57.730 08:26:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:57.730 [2024-11-28 08:26:54.816736] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1307410 was disconnected and freed. delete nvme_qpair. 
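The attach sequence reads cleanly off the INFO lines above: the discovery controller connects on port 8009, fetches the discovery log page, finds the cnode0 NVM subsystem at 10.0.0.2:4420, creates controller nvme0, probes it with a few reads (the 8/1/64-block readv debug lines look like the bdev examine pass), and the short-lived qpair 0x1307410 is then freed. Both sides of that state can be inspected while the test runs; a sketch, assuming current rpc.py verbs:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_discovery_info   # active discovery pollers
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers      # nvme0 and its path state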
00:27:57.730 08:26:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:57.730 08:26:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.730 08:26:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:57.730 08:26:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:57.731 08:26:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:57.731 08:26:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.731 08:26:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:57.731 08:26:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:57.731 08:26:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:57.731 08:26:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:57.731 08:26:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:57.731 08:26:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:57.731 08:26:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.731 08:26:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:57.731 08:26:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:57.731 08:26:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:57.731 08:26:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:57.731 08:26:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.991 08:26:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:57.991 08:26:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:58.935 08:26:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:58.935 08:26:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:58.935 08:26:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:58.935 08:26:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.935 08:26:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:58.935 08:26:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:58.935 08:26:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:58.935 08:26:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.935 08:26:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
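Here is the step the test is named for: with the bdev confirmed present, the target's address is deleted and the interface downed inside the namespace, yanking the TCP connection out from under the host with no NVMe-level shutdown at all. The script then polls the bdev list once a second until nvme0n1 disappears, equivalent to:

    # Kill the target's data path (run from the root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

    # wait_for_bdev '': loop until the host app reports an empty bdev list
    while [ -n "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" ]; do
        sleep 1
    done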
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:58.935 08:26:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:59.874 08:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:59.874 08:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:59.874 08:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:59.874 08:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.874 08:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:59.874 08:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:59.874 08:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:59.874 08:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.874 08:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:59.874 08:26:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:01.254 08:26:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:01.255 08:26:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:01.255 08:26:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:01.255 08:26:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.255 08:26:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:01.255 08:26:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:01.255 08:26:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:01.255 08:26:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.255 08:26:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:01.255 08:26:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:02.195 08:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:02.195 08:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:02.195 08:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:02.195 08:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.195 08:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:02.195 08:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:02.195 08:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:02.195 08:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:28:02.195 08:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:02.195 08:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:03.135 [2024-11-28 08:27:00.249921] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:03.135 [2024-11-28 08:27:00.249963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:03.135 [2024-11-28 08:27:00.249972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.135 [2024-11-28 08:27:00.249980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:03.135 [2024-11-28 08:27:00.249985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.135 [2024-11-28 08:27:00.249991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:03.135 [2024-11-28 08:27:00.250001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.135 [2024-11-28 08:27:00.250007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:03.135 [2024-11-28 08:27:00.250012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.135 [2024-11-28 08:27:00.250018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:03.135 [2024-11-28 08:27:00.250023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.135 [2024-11-28 08:27:00.250028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3c50 is same with the state(6) to be set 00:28:03.135 [2024-11-28 08:27:00.259942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e3c50 (9): Bad file descriptor 00:28:03.135 08:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:03.135 08:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:03.135 08:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:03.135 08:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.135 08:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:03.135 08:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:03.135 08:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:03.135 [2024-11-28 08:27:00.269975] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
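The one-second polls above are the script's wait-for-removal loop: get_bdev_list normalizes the bdev names fetched over the host RPC socket, and the caller spins until the list matches the expected value (here the empty string, i.e. nvme0n1 gone). A minimal reconstruction from the xtrace at host/discovery_remove_ifc.sh@29-34, not a verbatim copy of the script:

    get_bdev_list() {
        # @29: bdev names over the host's RPC socket, sorted and joined for comparison
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # @33-34: poll once per second until the list equals the expected value
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }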
00:28:03.135 [2024-11-28 08:27:00.269986] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:03.135 [2024-11-28 08:27:00.269989] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:03.135 [2024-11-28 08:27:00.269993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:03.135 [2024-11-28 08:27:00.270012] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:04.075 [2024-11-28 08:27:01.275241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:04.075 [2024-11-28 08:27:01.275347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e3c50 with addr=10.0.0.2, port=4420 00:28:04.075 [2024-11-28 08:27:01.275380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3c50 is same with the state(6) to be set 00:28:04.075 [2024-11-28 08:27:01.275445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e3c50 (9): Bad file descriptor 00:28:04.075 [2024-11-28 08:27:01.275599] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:28:04.076 [2024-11-28 08:27:01.275661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:04.076 [2024-11-28 08:27:01.275684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:04.076 [2024-11-28 08:27:01.275709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:04.076 [2024-11-28 08:27:01.275731] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:04.076 [2024-11-28 08:27:01.275747] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:04.076 [2024-11-28 08:27:01.275761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:04.076 [2024-11-28 08:27:01.275796] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:04.076 [2024-11-28 08:27:01.275812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:04.076 08:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.076 08:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:04.076 08:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:05.018 [2024-11-28 08:27:02.278221] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:05.018 [2024-11-28 08:27:02.278241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
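errno 110 here is ETIMEDOUT: the address was pulled off the namespaced interface earlier in the run, so every reconnect to 10.0.0.2:4420 times out and bdev_nvme keeps cycling through delete-qpairs, disconnect and reconnect. The removal step itself is not in this excerpt; judging from the restore commands at @82-83 further down, it was presumably the inverse (a hypothetical reconstruction, not quoted from the log):

    # hypothetical inverse of the @82-83 restore step shown below;
    # the actual removal command is earlier in the log
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0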
00:28:05.018 [2024-11-28 08:27:02.278254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:05.018 [2024-11-28 08:27:02.278259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:05.018 [2024-11-28 08:27:02.278265] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:28:05.018 [2024-11-28 08:27:02.278271] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:05.018 [2024-11-28 08:27:02.278275] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:05.018 [2024-11-28 08:27:02.278278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:05.018 [2024-11-28 08:27:02.278299] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:05.018 [2024-11-28 08:27:02.278326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.018 [2024-11-28 08:27:02.278333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.018 [2024-11-28 08:27:02.278342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.018 [2024-11-28 08:27:02.278347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.018 [2024-11-28 08:27:02.278353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.018 [2024-11-28 08:27:02.278359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.018 [2024-11-28 08:27:02.278364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.018 [2024-11-28 08:27:02.278369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.018 [2024-11-28 08:27:02.278375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:05.018 [2024-11-28 08:27:02.278381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.018 [2024-11-28 08:27:02.278386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:28:05.018 [2024-11-28 08:27:02.278566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d3350 (9): Bad file descriptor 00:28:05.018 [2024-11-28 08:27:02.279575] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:05.018 [2024-11-28 08:27:02.279582] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:28:05.018 08:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:05.018 08:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:05.018 08:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:05.280 08:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.280 08:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:05.280 08:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:05.280 08:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:05.280 08:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.280 08:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:05.280 08:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:05.280 08:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:05.280 08:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:05.280 08:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:05.280 08:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:05.280 08:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:05.280 08:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.280 08:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:05.280 08:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:05.280 08:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:05.280 08:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.280 08:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:05.280 08:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:06.222 08:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:06.222 08:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:06.222 08:27:03 
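With the controller parked in the failed state and the bdev list empty, the test puts the address back and waits for discovery to re-create the namespace under a new controller name. The three commands, verbatim from the trace above:

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # @82
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up                # @83
    wait_for_bdev nvme1n1                                               # @86

Note the expected name flips from nvme0n1 to nvme1n1: the old controller was torn down, so the re-attached subsystem gets the next free index.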
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:06.222 08:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.222 08:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:06.222 08:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:06.222 08:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:06.483 08:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.483 08:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:06.483 08:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:07.055 [2024-11-28 08:27:04.334084] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:07.055 [2024-11-28 08:27:04.334100] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:07.055 [2024-11-28 08:27:04.334110] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:07.316 [2024-11-28 08:27:04.462479] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:07.316 08:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:07.316 08:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:07.316 08:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:07.316 08:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.316 08:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:07.316 08:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:07.316 08:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:07.316 08:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.316 08:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:07.316 08:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:07.576 [2024-11-28 08:27:04.682632] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:28:07.576 [2024-11-28 08:27:04.683547] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x12bd020:1 started. 
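The burst above is the discovery poller working end to end: discovery_attach_cb fires once the re-enabled 10.0.0.2:8009 discovery controller connects, the log page is fetched, discovery_log_page_cb reports the NVM subsystem as new, and a fresh I/O controller (index 2, qpair 0x12bd020) is created toward port 4420. The discovery service itself was presumably armed much earlier in the test via the bdev_nvme_start_discovery RPC; a hypothetical recreation, not visible in this excerpt:

    # hypothetical; the actual invocation is earlier in the log
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4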
00:28:07.576 [2024-11-28 08:27:04.684465] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:07.576 [2024-11-28 08:27:04.684494] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:07.576 [2024-11-28 08:27:04.684510] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:07.576 [2024-11-28 08:27:04.684521] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:07.576 [2024-11-28 08:27:04.684527] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:07.576 [2024-11-28 08:27:04.690912] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x12bd020 was disconnected and freed. delete nvme_qpair. 00:28:08.516 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:08.516 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:08.516 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:08.516 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.516 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:08.516 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:08.516 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:08.516 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.516 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:08.516 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:08.516 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2110581 00:28:08.516 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2110581 ']' 00:28:08.516 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2110581 00:28:08.516 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:28:08.516 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:08.516 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2110581 00:28:08.516 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:08.516 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:08.516 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2110581' 00:28:08.516 killing process with pid 2110581 00:28:08.516 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2110581 00:28:08.516 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2110581 00:28:08.776 08:27:05 
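killprocess (common/autotest_common.sh@954-978, traced above for the host process and again below for the target) is deliberately defensive: it refuses an empty pid, checks the process is still alive with the null signal, inspects the process name so it never signals a bare sudo, and waits so the exit status is collected. A sketch reconstructed from the xtrace, not the script itself:

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1                             # @954
        kill -0 "$pid" || return 0                            # @958: already gone
        if [[ $(uname) == Linux ]]; then                      # @959
            process_name=$(ps --no-headers -o comm= "$pid")   # @960
        fi
        if [[ $process_name == sudo ]]; then                  # @964
            : # the real helper re-targets the child here; details not in this trace
        fi
        echo "killing process with pid $pid"                  # @972
        kill "$pid" && wait "$pid"                            # @973/@978
    }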
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:08.776 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:08.776 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:28:08.776 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:08.776 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:28:08.776 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:08.776 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:08.776 rmmod nvme_tcp 00:28:08.776 rmmod nvme_fabrics 00:28:08.776 rmmod nvme_keyring 00:28:08.776 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:08.776 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:28:08.776 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:28:08.776 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2110404 ']' 00:28:08.776 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2110404 00:28:08.776 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2110404 ']' 00:28:08.776 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2110404 00:28:08.776 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:28:08.776 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:08.776 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2110404 00:28:08.776 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:08.776 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:08.776 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2110404' 00:28:08.776 killing process with pid 2110404 00:28:08.776 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2110404 00:28:08.776 08:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2110404 00:28:08.776 08:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:08.776 08:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:08.777 08:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:08.777 08:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:28:08.777 08:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:28:08.777 08:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:08.777 08:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:28:09.038 08:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # 
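nvmftestfini's firewall cleanup (iptr, nvmf/common.sh@297 via the @791 helpers) only has to round-trip the ruleset because every rule the test adds is tagged with an '-m comment --comment SPDK_NVMF:...' marker (the insertion side is traced at @790 in the next test's setup below), so filtering those comments out of iptables-save restores the pre-test state exactly. As traced:

    iptr() {
        # @791: drop every rule carrying the SPDK_NVMF comment tag
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }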
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:09.038 08:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:09.038 08:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.038 08:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:09.038 08:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:10.951 08:27:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:10.951 00:28:10.951 real 0m24.333s 00:28:10.951 user 0m29.320s 00:28:10.951 sys 0m7.156s 00:28:10.952 08:27:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:10.952 08:27:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:10.952 ************************************ 00:28:10.952 END TEST nvmf_discovery_remove_ifc 00:28:10.952 ************************************ 00:28:10.952 08:27:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:10.952 08:27:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:10.952 08:27:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:10.952 08:27:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.952 ************************************ 00:28:10.952 START TEST nvmf_identify_kernel_target 00:28:10.952 ************************************ 00:28:10.952 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:11.213 * Looking for test storage... 
00:28:11.213 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:11.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.213 --rc genhtml_branch_coverage=1 00:28:11.213 --rc genhtml_function_coverage=1 00:28:11.213 --rc genhtml_legend=1 00:28:11.213 --rc geninfo_all_blocks=1 00:28:11.213 --rc geninfo_unexecuted_blocks=1 00:28:11.213 00:28:11.213 ' 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:11.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.213 --rc genhtml_branch_coverage=1 00:28:11.213 --rc genhtml_function_coverage=1 00:28:11.213 --rc genhtml_legend=1 00:28:11.213 --rc geninfo_all_blocks=1 00:28:11.213 --rc geninfo_unexecuted_blocks=1 00:28:11.213 00:28:11.213 ' 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:11.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.213 --rc genhtml_branch_coverage=1 00:28:11.213 --rc genhtml_function_coverage=1 00:28:11.213 --rc genhtml_legend=1 00:28:11.213 --rc geninfo_all_blocks=1 00:28:11.213 --rc geninfo_unexecuted_blocks=1 00:28:11.213 00:28:11.213 ' 00:28:11.213 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:11.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:11.213 --rc genhtml_branch_coverage=1 00:28:11.213 --rc genhtml_function_coverage=1 00:28:11.214 --rc genhtml_legend=1 00:28:11.214 --rc geninfo_all_blocks=1 00:28:11.214 --rc geninfo_unexecuted_blocks=1 00:28:11.214 00:28:11.214 ' 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
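The lcov gate that opens this test (lt 1.15 2) is a general dotted-version comparator: both strings are split on '.', '-' and ':' into arrays (scripts/common.sh@336-337), padded, and compared element by element (@364-368). A simplified reconstruction covering only the '<' path exercised here:

    lt() { cmp_versions "$1" '<' "$2"; }                 # @373

    cmp_versions() {
        local IFS=.-: op=$2 v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"                           # @336: "1.15" -> (1 15)
        read -ra ver2 <<< "$3"                           # @337: "2"    -> (2)
        # (the traced helper also routes each element through decimal(), @353-355)
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do   # @364
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1    # equal versions are not '<'
    }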
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:28:11.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:28:11.214 08:27:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:28:19.442 08:27:15 
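One real defect shows up in this stretch: "test/nvmf/common.sh: line 33: [: : integer expression expected". The xtrace shows why: the guard runs '[' '' -eq 1 ']', i.e. an unset test flag expands to the empty string inside a numeric test, which test(1) rejects. The run survives because the guard simply evaluates false, but the robust spelling defaults the variable first (a hypothetical fix with a placeholder name; the real variable is not visible in this excerpt):

    # as traced (breaks when the flag is unset):  [ "$SOME_TEST_FLAG" -eq 1 ]
    # defensive form:
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        : # enable the gated app option here
    fi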
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:19.442 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:19.442 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:19.442 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:19.442 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:19.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:19.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:28:19.442 00:28:19.442 --- 10.0.0.2 ping statistics --- 00:28:19.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.442 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:19.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:19.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:28:19.442 00:28:19.442 --- 10.0.0.1 ping statistics --- 00:28:19.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.442 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:19.442 08:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:19.442 08:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:19.442 08:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:19.442 08:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:28:19.442 08:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.442 08:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.442 08:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.442 08:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.442 08:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.442 08:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.442 08:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.442 08:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.442 08:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.442 08:27:16 
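nvmf_tcp_init (@250-287) wires the two E810 ports into a point-to-point 10.0.0.0/24 link: NVMF_TARGET_INTERFACE=cvl_0_0 moves into its own namespace, NVMF_INITIATOR_INTERFACE=cvl_0_1 stays in the default one, and the ACCEPT rule carries the SPDK_NVMF comment so the iptr cleanup shown earlier can strip it again. Condensed from the trace, commands verbatim:

    ip netns add cvl_0_0_ns_spdk                                          # @271
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # @274
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # @277
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # @278
    ip link set cvl_0_1 up                                                # @281
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up                  # @283
    ip netns exec cvl_0_0_ns_spdk ip link set lo up                       # @284
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'   # @790

The two pings then prove reachability in both directions before any NVMe traffic flows, and get_main_ns_ip settles on 10.0.0.1: for this kernel-target test both the nvmet target and the host run in the default namespace, on the cvl_0_1 side of the link.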
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:19.442 08:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:19.442 08:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:19.442 08:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:19.442 08:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:19.442 08:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:19.442 08:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:19.442 08:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:28:19.442 08:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:28:19.442 08:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:19.442 08:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:19.442 08:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:22.744 Waiting for block devices as requested 00:28:22.744 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:22.744 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:22.744 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:22.744 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:22.744 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:22.744 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:22.744 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:23.006 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:23.006 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:23.268 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:23.268 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:23.530 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:23.530 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:23.530 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:23.530 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:23.792 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:23.792 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:24.053 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:24.053 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:24.053 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:24.053 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:24.053 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:24.053 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
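From here the kernel becomes the target. The loop that just started (@678-681) picks the first non-zoned NVMe disk whose GPT probe fails (spdk-gpt.py's "No valid GPT data, bailing", seen just below, means the disk is unclaimed), and configure_kernel_target then publishes it through the kernel nvmet configfs tree (@686-705 below). xtrace elides redirection targets, so the destinations of the echo commands are not in the log; under the standard nvmet configfs layout they are presumably:

    nvmet=/sys/kernel/config/nvmet                                           # @662
    kernel_subsystem=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn           # @663
    kernel_namespace=$kernel_subsystem/namespaces/1                          # @664
    kernel_port=$nvmet/ports/1                                               # @665
    mkdir "$kernel_subsystem" "$kernel_namespace" "$kernel_port"             # @686-688
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$kernel_subsystem/attr_model"   # @693; target file assumed
    echo 1 > "$kernel_subsystem/attr_allow_any_host"                         # @695; assumed
    echo /dev/nvme0n1 > "$kernel_namespace/device_path"                      # @696; assumed
    echo 1 > "$kernel_namespace/enable"                                      # @697; assumed
    echo 10.0.0.1 > "$kernel_port/addr_traddr"                               # @699; assumed
    echo tcp > "$kernel_port/addr_trtype"                                    # @700; assumed
    echo 4420 > "$kernel_port/addr_trsvcid"                                  # @701; assumed
    echo ipv4 > "$kernel_port/addr_adrfam"                                   # @702; assumed
    ln -s "$kernel_subsystem" "$kernel_port/subsystems/"                     # @705

The hostnqn/hostid pair passed to nvme discover at @708 is the identity minted by nvme gen-hostnqn back at @17 above.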
00:28:24.053 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:24.053 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:24.053 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:24.314 No valid GPT data, bailing 00:28:24.314 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:24.314 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:24.314 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:24.314 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:24.314 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:24.314 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:24.314 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:24.314 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:24.314 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:24.314 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:28:24.314 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:24.314 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:28:24.314 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:24.314 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:28:24.314 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:28:24.314 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:28:24.314 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:24.314 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:28:24.314 00:28:24.314 Discovery Log Number of Records 2, Generation counter 2 00:28:24.314 =====Discovery Log Entry 0====== 00:28:24.314 trtype: tcp 00:28:24.314 adrfam: ipv4 00:28:24.314 subtype: current discovery subsystem 00:28:24.314 treq: not specified, sq flow control disable supported 00:28:24.314 portid: 1 00:28:24.314 trsvcid: 4420 00:28:24.314 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:24.314 traddr: 10.0.0.1 00:28:24.314 eflags: none 00:28:24.314 sectype: none 00:28:24.314 =====Discovery Log Entry 1====== 00:28:24.314 trtype: tcp 00:28:24.315 adrfam: ipv4 00:28:24.315 subtype: nvme subsystem 00:28:24.315 treq: not specified, sq flow control disable 
supported 00:28:24.315 portid: 1 00:28:24.315 trsvcid: 4420 00:28:24.315 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:24.315 traddr: 10.0.0.1 00:28:24.315 eflags: none 00:28:24.315 sectype: none 00:28:24.315 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:24.315 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:24.315 ===================================================== 00:28:24.315 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:24.315 ===================================================== 00:28:24.315 Controller Capabilities/Features 00:28:24.315 ================================ 00:28:24.315 Vendor ID: 0000 00:28:24.315 Subsystem Vendor ID: 0000 00:28:24.315 Serial Number: a0d284a5ef7ec94d09ae 00:28:24.315 Model Number: Linux 00:28:24.315 Firmware Version: 6.8.9-20 00:28:24.315 Recommended Arb Burst: 0 00:28:24.315 IEEE OUI Identifier: 00 00 00 00:28:24.315 Multi-path I/O 00:28:24.315 May have multiple subsystem ports: No 00:28:24.315 May have multiple controllers: No 00:28:24.315 Associated with SR-IOV VF: No 00:28:24.315 Max Data Transfer Size: Unlimited 00:28:24.315 Max Number of Namespaces: 0 00:28:24.315 Max Number of I/O Queues: 1024 00:28:24.315 NVMe Specification Version (VS): 1.3 00:28:24.315 NVMe Specification Version (Identify): 1.3 00:28:24.315 Maximum Queue Entries: 1024 00:28:24.315 Contiguous Queues Required: No 00:28:24.315 Arbitration Mechanisms Supported 00:28:24.315 Weighted Round Robin: Not Supported 00:28:24.315 Vendor Specific: Not Supported 00:28:24.315 Reset Timeout: 7500 ms 00:28:24.315 Doorbell Stride: 4 bytes 00:28:24.315 NVM Subsystem Reset: Not Supported 00:28:24.315 Command Sets Supported 00:28:24.315 NVM Command Set: Supported 00:28:24.315 Boot Partition: Not Supported 00:28:24.315 Memory Page Size Minimum: 4096 bytes 00:28:24.315 Memory Page Size Maximum: 4096 bytes 00:28:24.315 Persistent Memory Region: Not Supported 00:28:24.315 Optional Asynchronous Events Supported 00:28:24.315 Namespace Attribute Notices: Not Supported 00:28:24.315 Firmware Activation Notices: Not Supported 00:28:24.315 ANA Change Notices: Not Supported 00:28:24.315 PLE Aggregate Log Change Notices: Not Supported 00:28:24.315 LBA Status Info Alert Notices: Not Supported 00:28:24.315 EGE Aggregate Log Change Notices: Not Supported 00:28:24.315 Normal NVM Subsystem Shutdown event: Not Supported 00:28:24.315 Zone Descriptor Change Notices: Not Supported 00:28:24.315 Discovery Log Change Notices: Supported 00:28:24.315 Controller Attributes 00:28:24.315 128-bit Host Identifier: Not Supported 00:28:24.315 Non-Operational Permissive Mode: Not Supported 00:28:24.315 NVM Sets: Not Supported 00:28:24.315 Read Recovery Levels: Not Supported 00:28:24.315 Endurance Groups: Not Supported 00:28:24.315 Predictable Latency Mode: Not Supported 00:28:24.315 Traffic Based Keep ALive: Not Supported 00:28:24.315 Namespace Granularity: Not Supported 00:28:24.315 SQ Associations: Not Supported 00:28:24.315 UUID List: Not Supported 00:28:24.315 Multi-Domain Subsystem: Not Supported 00:28:24.315 Fixed Capacity Management: Not Supported 00:28:24.315 Variable Capacity Management: Not Supported 00:28:24.315 Delete Endurance Group: Not Supported 00:28:24.315 Delete NVM Set: Not Supported 00:28:24.315 Extended LBA Formats Supported: Not Supported 00:28:24.315 Flexible Data Placement 
Supported: Not Supported 00:28:24.315 00:28:24.315 Controller Memory Buffer Support 00:28:24.315 ================================ 00:28:24.315 Supported: No 00:28:24.315 00:28:24.315 Persistent Memory Region Support 00:28:24.315 ================================ 00:28:24.315 Supported: No 00:28:24.315 00:28:24.315 Admin Command Set Attributes 00:28:24.315 ============================ 00:28:24.315 Security Send/Receive: Not Supported 00:28:24.315 Format NVM: Not Supported 00:28:24.315 Firmware Activate/Download: Not Supported 00:28:24.315 Namespace Management: Not Supported 00:28:24.315 Device Self-Test: Not Supported 00:28:24.315 Directives: Not Supported 00:28:24.315 NVMe-MI: Not Supported 00:28:24.315 Virtualization Management: Not Supported 00:28:24.315 Doorbell Buffer Config: Not Supported 00:28:24.315 Get LBA Status Capability: Not Supported 00:28:24.315 Command & Feature Lockdown Capability: Not Supported 00:28:24.315 Abort Command Limit: 1 00:28:24.315 Async Event Request Limit: 1 00:28:24.315 Number of Firmware Slots: N/A 00:28:24.315 Firmware Slot 1 Read-Only: N/A 00:28:24.315 Firmware Activation Without Reset: N/A 00:28:24.315 Multiple Update Detection Support: N/A 00:28:24.315 Firmware Update Granularity: No Information Provided 00:28:24.315 Per-Namespace SMART Log: No 00:28:24.315 Asymmetric Namespace Access Log Page: Not Supported 00:28:24.315 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:24.315 Command Effects Log Page: Not Supported 00:28:24.315 Get Log Page Extended Data: Supported 00:28:24.315 Telemetry Log Pages: Not Supported 00:28:24.315 Persistent Event Log Pages: Not Supported 00:28:24.315 Supported Log Pages Log Page: May Support 00:28:24.315 Commands Supported & Effects Log Page: Not Supported 00:28:24.315 Feature Identifiers & Effects Log Page:May Support 00:28:24.315 NVMe-MI Commands & Effects Log Page: May Support 00:28:24.315 Data Area 4 for Telemetry Log: Not Supported 00:28:24.315 Error Log Page Entries Supported: 1 00:28:24.315 Keep Alive: Not Supported 00:28:24.315 00:28:24.315 NVM Command Set Attributes 00:28:24.315 ========================== 00:28:24.315 Submission Queue Entry Size 00:28:24.315 Max: 1 00:28:24.315 Min: 1 00:28:24.315 Completion Queue Entry Size 00:28:24.315 Max: 1 00:28:24.315 Min: 1 00:28:24.315 Number of Namespaces: 0 00:28:24.315 Compare Command: Not Supported 00:28:24.315 Write Uncorrectable Command: Not Supported 00:28:24.315 Dataset Management Command: Not Supported 00:28:24.315 Write Zeroes Command: Not Supported 00:28:24.315 Set Features Save Field: Not Supported 00:28:24.315 Reservations: Not Supported 00:28:24.315 Timestamp: Not Supported 00:28:24.315 Copy: Not Supported 00:28:24.315 Volatile Write Cache: Not Present 00:28:24.315 Atomic Write Unit (Normal): 1 00:28:24.315 Atomic Write Unit (PFail): 1 00:28:24.315 Atomic Compare & Write Unit: 1 00:28:24.315 Fused Compare & Write: Not Supported 00:28:24.315 Scatter-Gather List 00:28:24.315 SGL Command Set: Supported 00:28:24.315 SGL Keyed: Not Supported 00:28:24.315 SGL Bit Bucket Descriptor: Not Supported 00:28:24.315 SGL Metadata Pointer: Not Supported 00:28:24.315 Oversized SGL: Not Supported 00:28:24.315 SGL Metadata Address: Not Supported 00:28:24.315 SGL Offset: Supported 00:28:24.315 Transport SGL Data Block: Not Supported 00:28:24.315 Replay Protected Memory Block: Not Supported 00:28:24.315 00:28:24.315 Firmware Slot Information 00:28:24.315 ========================= 00:28:24.315 Active slot: 0 00:28:24.315 00:28:24.315 00:28:24.315 Error Log 00:28:24.315 
========= 00:28:24.315 00:28:24.315 Active Namespaces 00:28:24.315 ================= 00:28:24.315 Discovery Log Page 00:28:24.315 ================== 00:28:24.315 Generation Counter: 2 00:28:24.315 Number of Records: 2 00:28:24.315 Record Format: 0 00:28:24.315 00:28:24.315 Discovery Log Entry 0 00:28:24.315 ---------------------- 00:28:24.315 Transport Type: 3 (TCP) 00:28:24.315 Address Family: 1 (IPv4) 00:28:24.315 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:24.315 Entry Flags: 00:28:24.315 Duplicate Returned Information: 0 00:28:24.315 Explicit Persistent Connection Support for Discovery: 0 00:28:24.315 Transport Requirements: 00:28:24.315 Secure Channel: Not Specified 00:28:24.315 Port ID: 1 (0x0001) 00:28:24.315 Controller ID: 65535 (0xffff) 00:28:24.315 Admin Max SQ Size: 32 00:28:24.315 Transport Service Identifier: 4420 00:28:24.315 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:24.315 Transport Address: 10.0.0.1 00:28:24.315 Discovery Log Entry 1 00:28:24.315 ---------------------- 00:28:24.315 Transport Type: 3 (TCP) 00:28:24.315 Address Family: 1 (IPv4) 00:28:24.315 Subsystem Type: 2 (NVM Subsystem) 00:28:24.315 Entry Flags: 00:28:24.315 Duplicate Returned Information: 0 00:28:24.315 Explicit Persistent Connection Support for Discovery: 0 00:28:24.315 Transport Requirements: 00:28:24.315 Secure Channel: Not Specified 00:28:24.315 Port ID: 1 (0x0001) 00:28:24.315 Controller ID: 65535 (0xffff) 00:28:24.315 Admin Max SQ Size: 32 00:28:24.315 Transport Service Identifier: 4420 00:28:24.315 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:24.315 Transport Address: 10.0.0.1 00:28:24.315 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:24.576 get_feature(0x01) failed 00:28:24.576 get_feature(0x02) failed 00:28:24.576 get_feature(0x04) failed 00:28:24.576 ===================================================== 00:28:24.576 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:24.576 ===================================================== 00:28:24.576 Controller Capabilities/Features 00:28:24.576 ================================ 00:28:24.576 Vendor ID: 0000 00:28:24.576 Subsystem Vendor ID: 0000 00:28:24.576 Serial Number: 5ef32abd65d2a75c4283 00:28:24.576 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:24.576 Firmware Version: 6.8.9-20 00:28:24.576 Recommended Arb Burst: 6 00:28:24.576 IEEE OUI Identifier: 00 00 00 00:28:24.576 Multi-path I/O 00:28:24.576 May have multiple subsystem ports: Yes 00:28:24.577 May have multiple controllers: Yes 00:28:24.577 Associated with SR-IOV VF: No 00:28:24.577 Max Data Transfer Size: Unlimited 00:28:24.577 Max Number of Namespaces: 1024 00:28:24.577 Max Number of I/O Queues: 128 00:28:24.577 NVMe Specification Version (VS): 1.3 00:28:24.577 NVMe Specification Version (Identify): 1.3 00:28:24.577 Maximum Queue Entries: 1024 00:28:24.577 Contiguous Queues Required: No 00:28:24.577 Arbitration Mechanisms Supported 00:28:24.577 Weighted Round Robin: Not Supported 00:28:24.577 Vendor Specific: Not Supported 00:28:24.577 Reset Timeout: 7500 ms 00:28:24.577 Doorbell Stride: 4 bytes 00:28:24.577 NVM Subsystem Reset: Not Supported 00:28:24.577 Command Sets Supported 00:28:24.577 NVM Command Set: Supported 00:28:24.577 Boot Partition: Not Supported 00:28:24.577 
Memory Page Size Minimum: 4096 bytes 00:28:24.577 Memory Page Size Maximum: 4096 bytes 00:28:24.577 Persistent Memory Region: Not Supported 00:28:24.577 Optional Asynchronous Events Supported 00:28:24.577 Namespace Attribute Notices: Supported 00:28:24.577 Firmware Activation Notices: Not Supported 00:28:24.577 ANA Change Notices: Supported 00:28:24.577 PLE Aggregate Log Change Notices: Not Supported 00:28:24.577 LBA Status Info Alert Notices: Not Supported 00:28:24.577 EGE Aggregate Log Change Notices: Not Supported 00:28:24.577 Normal NVM Subsystem Shutdown event: Not Supported 00:28:24.577 Zone Descriptor Change Notices: Not Supported 00:28:24.577 Discovery Log Change Notices: Not Supported 00:28:24.577 Controller Attributes 00:28:24.577 128-bit Host Identifier: Supported 00:28:24.577 Non-Operational Permissive Mode: Not Supported 00:28:24.577 NVM Sets: Not Supported 00:28:24.577 Read Recovery Levels: Not Supported 00:28:24.577 Endurance Groups: Not Supported 00:28:24.577 Predictable Latency Mode: Not Supported 00:28:24.577 Traffic Based Keep ALive: Supported 00:28:24.577 Namespace Granularity: Not Supported 00:28:24.577 SQ Associations: Not Supported 00:28:24.577 UUID List: Not Supported 00:28:24.577 Multi-Domain Subsystem: Not Supported 00:28:24.577 Fixed Capacity Management: Not Supported 00:28:24.577 Variable Capacity Management: Not Supported 00:28:24.577 Delete Endurance Group: Not Supported 00:28:24.577 Delete NVM Set: Not Supported 00:28:24.577 Extended LBA Formats Supported: Not Supported 00:28:24.577 Flexible Data Placement Supported: Not Supported 00:28:24.577 00:28:24.577 Controller Memory Buffer Support 00:28:24.577 ================================ 00:28:24.577 Supported: No 00:28:24.577 00:28:24.577 Persistent Memory Region Support 00:28:24.577 ================================ 00:28:24.577 Supported: No 00:28:24.577 00:28:24.577 Admin Command Set Attributes 00:28:24.577 ============================ 00:28:24.577 Security Send/Receive: Not Supported 00:28:24.577 Format NVM: Not Supported 00:28:24.577 Firmware Activate/Download: Not Supported 00:28:24.577 Namespace Management: Not Supported 00:28:24.577 Device Self-Test: Not Supported 00:28:24.577 Directives: Not Supported 00:28:24.577 NVMe-MI: Not Supported 00:28:24.577 Virtualization Management: Not Supported 00:28:24.577 Doorbell Buffer Config: Not Supported 00:28:24.577 Get LBA Status Capability: Not Supported 00:28:24.577 Command & Feature Lockdown Capability: Not Supported 00:28:24.577 Abort Command Limit: 4 00:28:24.577 Async Event Request Limit: 4 00:28:24.577 Number of Firmware Slots: N/A 00:28:24.577 Firmware Slot 1 Read-Only: N/A 00:28:24.577 Firmware Activation Without Reset: N/A 00:28:24.577 Multiple Update Detection Support: N/A 00:28:24.577 Firmware Update Granularity: No Information Provided 00:28:24.577 Per-Namespace SMART Log: Yes 00:28:24.577 Asymmetric Namespace Access Log Page: Supported 00:28:24.577 ANA Transition Time : 10 sec 00:28:24.577 00:28:24.577 Asymmetric Namespace Access Capabilities 00:28:24.577 ANA Optimized State : Supported 00:28:24.577 ANA Non-Optimized State : Supported 00:28:24.577 ANA Inaccessible State : Supported 00:28:24.577 ANA Persistent Loss State : Supported 00:28:24.577 ANA Change State : Supported 00:28:24.577 ANAGRPID is not changed : No 00:28:24.577 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:24.577 00:28:24.577 ANA Group Identifier Maximum : 128 00:28:24.577 Number of ANA Group Identifiers : 128 00:28:24.577 Max Number of Allowed Namespaces : 1024 00:28:24.577 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:24.577 Command Effects Log Page: Supported 00:28:24.577 Get Log Page Extended Data: Supported 00:28:24.577 Telemetry Log Pages: Not Supported 00:28:24.577 Persistent Event Log Pages: Not Supported 00:28:24.577 Supported Log Pages Log Page: May Support 00:28:24.577 Commands Supported & Effects Log Page: Not Supported 00:28:24.577 Feature Identifiers & Effects Log Page:May Support 00:28:24.577 NVMe-MI Commands & Effects Log Page: May Support 00:28:24.577 Data Area 4 for Telemetry Log: Not Supported 00:28:24.577 Error Log Page Entries Supported: 128 00:28:24.577 Keep Alive: Supported 00:28:24.577 Keep Alive Granularity: 1000 ms 00:28:24.577 00:28:24.577 NVM Command Set Attributes 00:28:24.577 ========================== 00:28:24.577 Submission Queue Entry Size 00:28:24.577 Max: 64 00:28:24.577 Min: 64 00:28:24.577 Completion Queue Entry Size 00:28:24.577 Max: 16 00:28:24.577 Min: 16 00:28:24.577 Number of Namespaces: 1024 00:28:24.577 Compare Command: Not Supported 00:28:24.577 Write Uncorrectable Command: Not Supported 00:28:24.577 Dataset Management Command: Supported 00:28:24.577 Write Zeroes Command: Supported 00:28:24.577 Set Features Save Field: Not Supported 00:28:24.577 Reservations: Not Supported 00:28:24.577 Timestamp: Not Supported 00:28:24.577 Copy: Not Supported 00:28:24.577 Volatile Write Cache: Present 00:28:24.577 Atomic Write Unit (Normal): 1 00:28:24.577 Atomic Write Unit (PFail): 1 00:28:24.577 Atomic Compare & Write Unit: 1 00:28:24.577 Fused Compare & Write: Not Supported 00:28:24.577 Scatter-Gather List 00:28:24.577 SGL Command Set: Supported 00:28:24.577 SGL Keyed: Not Supported 00:28:24.577 SGL Bit Bucket Descriptor: Not Supported 00:28:24.577 SGL Metadata Pointer: Not Supported 00:28:24.578 Oversized SGL: Not Supported 00:28:24.578 SGL Metadata Address: Not Supported 00:28:24.578 SGL Offset: Supported 00:28:24.578 Transport SGL Data Block: Not Supported 00:28:24.578 Replay Protected Memory Block: Not Supported 00:28:24.578 00:28:24.578 Firmware Slot Information 00:28:24.578 ========================= 00:28:24.578 Active slot: 0 00:28:24.578 00:28:24.578 Asymmetric Namespace Access 00:28:24.578 =========================== 00:28:24.578 Change Count : 0 00:28:24.578 Number of ANA Group Descriptors : 1 00:28:24.578 ANA Group Descriptor : 0 00:28:24.578 ANA Group ID : 1 00:28:24.578 Number of NSID Values : 1 00:28:24.578 Change Count : 0 00:28:24.578 ANA State : 1 00:28:24.578 Namespace Identifier : 1 00:28:24.578 00:28:24.578 Commands Supported and Effects 00:28:24.578 ============================== 00:28:24.578 Admin Commands 00:28:24.578 -------------- 00:28:24.578 Get Log Page (02h): Supported 00:28:24.578 Identify (06h): Supported 00:28:24.578 Abort (08h): Supported 00:28:24.578 Set Features (09h): Supported 00:28:24.578 Get Features (0Ah): Supported 00:28:24.578 Asynchronous Event Request (0Ch): Supported 00:28:24.578 Keep Alive (18h): Supported 00:28:24.578 I/O Commands 00:28:24.578 ------------ 00:28:24.578 Flush (00h): Supported 00:28:24.578 Write (01h): Supported LBA-Change 00:28:24.578 Read (02h): Supported 00:28:24.578 Write Zeroes (08h): Supported LBA-Change 00:28:24.578 Dataset Management (09h): Supported 00:28:24.578 00:28:24.578 Error Log 00:28:24.578 ========= 00:28:24.578 Entry: 0 00:28:24.578 Error Count: 0x3 00:28:24.578 Submission Queue Id: 0x0 00:28:24.578 Command Id: 0x5 00:28:24.578 Phase Bit: 0 00:28:24.578 Status Code: 0x2 00:28:24.578 Status Code Type: 0x0 00:28:24.578 Do Not Retry: 1 00:28:24.578 
Error Location: 0x28 00:28:24.578 LBA: 0x0 00:28:24.578 Namespace: 0x0 00:28:24.578 Vendor Log Page: 0x0 00:28:24.578 ----------- 00:28:24.578 Entry: 1 00:28:24.578 Error Count: 0x2 00:28:24.578 Submission Queue Id: 0x0 00:28:24.578 Command Id: 0x5 00:28:24.578 Phase Bit: 0 00:28:24.578 Status Code: 0x2 00:28:24.578 Status Code Type: 0x0 00:28:24.578 Do Not Retry: 1 00:28:24.578 Error Location: 0x28 00:28:24.578 LBA: 0x0 00:28:24.578 Namespace: 0x0 00:28:24.578 Vendor Log Page: 0x0 00:28:24.578 ----------- 00:28:24.578 Entry: 2 00:28:24.578 Error Count: 0x1 00:28:24.578 Submission Queue Id: 0x0 00:28:24.578 Command Id: 0x4 00:28:24.578 Phase Bit: 0 00:28:24.578 Status Code: 0x2 00:28:24.578 Status Code Type: 0x0 00:28:24.578 Do Not Retry: 1 00:28:24.578 Error Location: 0x28 00:28:24.578 LBA: 0x0 00:28:24.578 Namespace: 0x0 00:28:24.578 Vendor Log Page: 0x0 00:28:24.578 00:28:24.578 Number of Queues 00:28:24.578 ================ 00:28:24.578 Number of I/O Submission Queues: 128 00:28:24.578 Number of I/O Completion Queues: 128 00:28:24.578 00:28:24.578 ZNS Specific Controller Data 00:28:24.578 ============================ 00:28:24.578 Zone Append Size Limit: 0 00:28:24.578 00:28:24.578 00:28:24.578 Active Namespaces 00:28:24.578 ================= 00:28:24.578 get_feature(0x05) failed 00:28:24.578 Namespace ID:1 00:28:24.578 Command Set Identifier: NVM (00h) 00:28:24.578 Deallocate: Supported 00:28:24.578 Deallocated/Unwritten Error: Not Supported 00:28:24.578 Deallocated Read Value: Unknown 00:28:24.578 Deallocate in Write Zeroes: Not Supported 00:28:24.578 Deallocated Guard Field: 0xFFFF 00:28:24.578 Flush: Supported 00:28:24.578 Reservation: Not Supported 00:28:24.578 Namespace Sharing Capabilities: Multiple Controllers 00:28:24.578 Size (in LBAs): 3750748848 (1788GiB) 00:28:24.578 Capacity (in LBAs): 3750748848 (1788GiB) 00:28:24.578 Utilization (in LBAs): 3750748848 (1788GiB) 00:28:24.578 UUID: f214f1e6-c086-45c4-a878-9f478c7e52d7 00:28:24.578 Thin Provisioning: Not Supported 00:28:24.578 Per-NS Atomic Units: Yes 00:28:24.578 Atomic Write Unit (Normal): 8 00:28:24.578 Atomic Write Unit (PFail): 8 00:28:24.578 Preferred Write Granularity: 8 00:28:24.578 Atomic Compare & Write Unit: 8 00:28:24.578 Atomic Boundary Size (Normal): 0 00:28:24.578 Atomic Boundary Size (PFail): 0 00:28:24.578 Atomic Boundary Offset: 0 00:28:24.578 NGUID/EUI64 Never Reused: No 00:28:24.578 ANA group ID: 1 00:28:24.578 Namespace Write Protected: No 00:28:24.578 Number of LBA Formats: 1 00:28:24.578 Current LBA Format: LBA Format #00 00:28:24.578 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:24.578 00:28:24.578 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:24.578 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:24.578 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:28:24.578 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:24.578 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:28:24.578 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:24.578 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:24.578 rmmod nvme_tcp 00:28:24.578 rmmod nvme_fabrics 00:28:24.578 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:24.578 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:28:24.578 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:28:24.578 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:24.578 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:24.578 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:24.578 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:24.578 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:28:24.578 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:28:24.578 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:24.578 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:28:24.579 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:24.579 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:24.579 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.579 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.579 08:27:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.123 08:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:27.123 08:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:27.123 08:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:27.123 08:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:28:27.123 08:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:27.123 08:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:27.123 08:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:27.123 08:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:27.123 08:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:27.123 08:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:27.123 08:27:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:30.432 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:30.432 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:30.432 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:28:30.432 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:30.432 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:30.433 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:30.433 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:30.433 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:30.433 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:30.433 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:30.433 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:30.433 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:30.433 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:30.433 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:30.433 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:30.433 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:30.433 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:30.694 00:28:30.694 real 0m19.750s 00:28:30.694 user 0m5.401s 00:28:30.694 sys 0m11.331s 00:28:30.694 08:27:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:30.955 08:27:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:30.955 ************************************ 00:28:30.955 END TEST nvmf_identify_kernel_target 00:28:30.955 ************************************ 00:28:30.955 08:27:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:30.955 08:27:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:30.955 08:27:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:30.955 08:27:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.955 ************************************ 00:28:30.955 START TEST nvmf_auth_host 00:28:30.955 ************************************ 00:28:30.955 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:30.955 * Looking for test storage... 
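clean_kernel_target, traced above just before setup.sh hands the devices back to vfio-pci and the next test begins, unwinds that configfs tree strictly in reverse; as a standalone sketch matching the traced order:

# Teardown: the port->subsystem symlink must go before the directories,
# and directories before the modules (names as in the setup sketch earlier).
nqn=nqn.2016-06.io.spdk:testnqn
nvmet=/sys/kernel/config/nvmet

echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"   # quiesce the namespace
rm -f "$nvmet/ports/1/subsystems/$nqn"                  # unexport from the port
rmdir "$nvmet/subsystems/$nqn/namespaces/1"
rmdir "$nvmet/ports/1"
rmdir "$nvmet/subsystems/$nqn"
modprobe -r nvmet_tcp nvmet                             # -r takes a list; transport first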
00:28:30.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:30.955 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:30.955 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:28:30.955 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:31.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.218 --rc genhtml_branch_coverage=1 00:28:31.218 --rc genhtml_function_coverage=1 00:28:31.218 --rc genhtml_legend=1 00:28:31.218 --rc geninfo_all_blocks=1 00:28:31.218 --rc geninfo_unexecuted_blocks=1 00:28:31.218 00:28:31.218 ' 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:31.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.218 --rc genhtml_branch_coverage=1 00:28:31.218 --rc genhtml_function_coverage=1 00:28:31.218 --rc genhtml_legend=1 00:28:31.218 --rc geninfo_all_blocks=1 00:28:31.218 --rc geninfo_unexecuted_blocks=1 00:28:31.218 00:28:31.218 ' 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:31.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.218 --rc genhtml_branch_coverage=1 00:28:31.218 --rc genhtml_function_coverage=1 00:28:31.218 --rc genhtml_legend=1 00:28:31.218 --rc geninfo_all_blocks=1 00:28:31.218 --rc geninfo_unexecuted_blocks=1 00:28:31.218 00:28:31.218 ' 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:31.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:31.218 --rc genhtml_branch_coverage=1 00:28:31.218 --rc genhtml_function_coverage=1 00:28:31.218 --rc genhtml_legend=1 00:28:31.218 --rc geninfo_all_blocks=1 00:28:31.218 --rc geninfo_unexecuted_blocks=1 00:28:31.218 00:28:31.218 ' 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:31.218 08:27:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:31.218 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:31.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:31.219 08:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:39.366 08:27:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:39.366 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:39.366 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.366 
08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:39.366 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:39.366 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:39.366 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:39.366 08:27:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:39.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:39.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:28:39.367 00:28:39.367 --- 10.0.0.2 ping statistics --- 00:28:39.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.367 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:39.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:39.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:28:39.367 00:28:39.367 --- 10.0.0.1 ping statistics --- 00:28:39.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.367 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2125554 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2125554 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2125554 ']' 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
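The nvmf_tcp_init steps traced above split the two E810 ports into a point-to-point pair across a network namespace, so one host runs both ends with separate stacks: the target port and loopback live in cvl_0_0_ns_spdk, the initiator port stays in the root namespace, and an iptables rule opens port 4420. As a standalone sketch (rule comment simplified):

# Target side goes into a netns; initiator side stays in the root namespace.
ns=cvl_0_0_ns_spdk
ip netns add "$ns"
ip link set cvl_0_0 netns "$ns"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$ns" ip link set cvl_0_0 up
ip netns exec "$ns" ip link set lo up

# Open the NVMe/TCP port. Tagging the rule lets teardown strip it again
# with: iptables-save | grep -v SPDK_NVMF | iptables-restore (the iptr trace).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF

ping -c 1 10.0.0.2                       # root namespace -> target address
ip netns exec "$ns" ping -c 1 10.0.0.1   # namespace -> initiator address

The target application itself is then simply prefixed with "ip netns exec cvl_0_0_ns_spdk", which is why NVMF_APP above is rebuilt from NVMF_TARGET_NS_CMD.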
00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:39.367 08:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=baeb34c4bc3093f628a5a190ad699282 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Z13 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key baeb34c4bc3093f628a5a190ad699282 0 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 baeb34c4bc3093f628a5a190ad699282 0 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=baeb34c4bc3093f628a5a190ad699282 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Z13 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Z13 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Z13 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:39.630 08:27:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ec99b95762ccf5eabc46bd70e9a2d0605a99480c1f29c3e7ee91bec59a1a8ef1 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.sEk 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ec99b95762ccf5eabc46bd70e9a2d0605a99480c1f29c3e7ee91bec59a1a8ef1 3 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ec99b95762ccf5eabc46bd70e9a2d0605a99480c1f29c3e7ee91bec59a1a8ef1 3 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ec99b95762ccf5eabc46bd70e9a2d0605a99480c1f29c3e7ee91bec59a1a8ef1 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.sEk 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.sEk 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.sEk 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=036328597fffefbc1935278b6c06d86489cf5817ceef8172 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ES6 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 036328597fffefbc1935278b6c06d86489cf5817ceef8172 0 00:28:39.630 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 036328597fffefbc1935278b6c06d86489cf5817ceef8172 0 
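At this point the script has minted keys[0] (a null-digest secret) plus its controller counterpart ckeys[0] (sha512), and is partway through keys[1]; the same recipe repeats for every slot. gen_dhchap_key draws len/2 bytes from /dev/urandom, hex-encodes them, and the embedded python step wraps that hex string in the DHHC-1 secret format: base64 of the secret followed by its little-endian CRC-32, with the digest number as the middle field. A self-contained sketch of that recipe (the names paraphrase nvmf/common.sh rather than quote it, and the framing is inferred from the keys printed in this log):

    gen_dhchap_key() { # usage: gen_dhchap_key <null|sha256|sha384|sha512> <hex-digits>
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local digest=$1 len=$2 key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len/2 bytes -> len hex digits
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # the hex string itself is the secret: base64(secret || crc32(secret), little-endian)
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"                               # keep the secret file private
        echo "$file"
    }

Calling gen_dhchap_key null 32 reproduces the shape of keys[0] above, with the digest field 00/01/02/03 selecting null/sha256/sha384/sha512.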
00:28:39.631 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:39.631 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:39.631 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=036328597fffefbc1935278b6c06d86489cf5817ceef8172 00:28:39.631 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:39.631 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:39.893 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ES6 00:28:39.893 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ES6 00:28:39.893 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ES6 00:28:39.893 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:39.893 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:39.893 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:39.893 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:39.893 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:39.893 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:39.893 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:39.893 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bbe0c8dd178919cd3fac28ce4c50f23755b7f9b5a127691f 00:28:39.893 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:39.893 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.toD 00:28:39.893 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bbe0c8dd178919cd3fac28ce4c50f23755b7f9b5a127691f 2 00:28:39.893 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bbe0c8dd178919cd3fac28ce4c50f23755b7f9b5a127691f 2 00:28:39.893 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:39.893 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:39.893 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bbe0c8dd178919cd3fac28ce4c50f23755b7f9b5a127691f 00:28:39.893 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:39.893 08:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.toD 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.toD 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.toD 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:39.893 08:27:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=405f072ddb5821d569a643f08539bfe7 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.y71 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 405f072ddb5821d569a643f08539bfe7 1 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 405f072ddb5821d569a643f08539bfe7 1 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=405f072ddb5821d569a643f08539bfe7 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.y71 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.y71 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.y71 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9b40434d6e43ae22bc84efa03c173c36 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.TxJ 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9b40434d6e43ae22bc84efa03c173c36 1 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9b40434d6e43ae22bc84efa03c173c36 1 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=9b40434d6e43ae22bc84efa03c173c36 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.TxJ 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.TxJ 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.TxJ 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=eca58bd216aae911ddc5a76406d3a5e087dc96245dfed632 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.VN1 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key eca58bd216aae911ddc5a76406d3a5e087dc96245dfed632 2 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 eca58bd216aae911ddc5a76406d3a5e087dc96245dfed632 2 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=eca58bd216aae911ddc5a76406d3a5e087dc96245dfed632 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:39.893 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.VN1 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.VN1 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.VN1 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:40.155 08:27:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2bae0b941a750a968e4e5c895f826559 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Iat 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2bae0b941a750a968e4e5c895f826559 0 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2bae0b941a750a968e4e5c895f826559 0 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2bae0b941a750a968e4e5c895f826559 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Iat 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Iat 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Iat 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f4f1c2f30ab6a87d9c0a7f17f86a9724cd70a55549289949b86334ad8b34036f 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Anu 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f4f1c2f30ab6a87d9c0a7f17f86a9724cd70a55549289949b86334ad8b34036f 3 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f4f1c2f30ab6a87d9c0a7f17f86a9724cd70a55549289949b86334ad8b34036f 3 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f4f1c2f30ab6a87d9c0a7f17f86a9724cd70a55549289949b86334ad8b34036f 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Anu 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Anu 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Anu 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2125554 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2125554 ']' 00:28:40.155 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:40.156 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:40.156 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:40.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:40.156 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:40.156 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Z13 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.sEk ]] 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sEk 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ES6 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.toD ]] 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.toD 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.y71 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.TxJ ]] 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TxJ 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.VN1 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Iat ]] 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Iat 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Anu 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.417 08:27:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:40.417 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:40.418 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:40.418 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:40.418 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:40.418 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:28:40.418 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:40.418 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:40.680 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:40.680 08:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:43.985 Waiting for block devices as requested 00:28:43.985 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:43.985 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:43.985 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:44.245 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:44.245 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:44.245 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:44.506 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:44.506 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:44.506 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:44.768 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:44.768 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:45.029 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:45.029 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:45.029 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:45.029 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:45.289 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:45.289 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:46.233 No valid GPT data, bailing 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:46.233 08:27:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:28:46.233 00:28:46.233 Discovery Log Number of Records 2, Generation counter 2 00:28:46.233 =====Discovery Log Entry 0====== 00:28:46.233 trtype: tcp 00:28:46.233 adrfam: ipv4 00:28:46.233 subtype: current discovery subsystem 00:28:46.233 treq: not specified, sq flow control disable supported 00:28:46.233 portid: 1 00:28:46.233 trsvcid: 4420 00:28:46.233 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:46.233 traddr: 10.0.0.1 00:28:46.233 eflags: none 00:28:46.233 sectype: none 00:28:46.233 =====Discovery Log Entry 1====== 00:28:46.233 trtype: tcp 00:28:46.233 adrfam: ipv4 00:28:46.233 subtype: nvme subsystem 00:28:46.233 treq: not specified, sq flow control disable supported 00:28:46.233 portid: 1 00:28:46.233 trsvcid: 4420 00:28:46.233 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:46.233 traddr: 10.0.0.1 00:28:46.233 eflags: none 00:28:46.233 sectype: none 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:28:46.233 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: ]] 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.234 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.495 nvme0n1 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: ]] 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
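From here on the test is one cycle repeated for every (digest, dhgroup, keyid) combination: nvmet_auth_set_key programs the kernel target's expectations for the host, bdev_nvme_set_options pins the SPDK host to the digest/dhgroup pair under test, bdev_nvme_attach_controller authenticates with the matching key pair, and the controller is verified and detached. In scripts/rpc.py terms (the trace issues the same RPCs through its rpc_cmd wrapper; key0/ckey0 are the keyring names registered earlier):

    rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # controller up?
    rpc.py bdev_nvme_detach_controller nvme0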
00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.495 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.756 nvme0n1 00:28:46.756 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.756 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.756 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.756 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.756 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.756 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.756 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.756 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.756 08:27:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.756 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.756 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.756 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.756 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:46.756 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.756 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:46.756 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:46.756 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:46.756 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:28:46.756 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:28:46.756 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: ]] 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.757 08:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.018 nvme0n1 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: ]] 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.018 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.279 nvme0n1 00:28:47.279 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.279 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.279 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.279 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:47.279 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.279 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.279 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.279 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.279 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.279 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.279 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.279 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.279 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:47.279 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.279 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:47.279 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:47.279 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:47.279 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:28:47.279 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:28:47.279 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:47.279 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:47.279 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:28:47.279 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: ]] 00:28:47.279 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:28:47.280 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:47.280 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.280 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:47.280 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:47.280 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:47.280 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.280 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:47.280 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.280 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.280 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.280 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:47.280 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:47.280 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.280 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:47.280 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.280 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.280 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.280 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.280 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.280 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.280 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.280 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:47.280 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.280 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.541 nvme0n1 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.541 nvme0n1 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.541 08:27:44 
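Slot 4 above carries no controller key (ckey is empty, hence the [[ -z '' ]] branch), so only unidirectional authentication is exercised for it. The secrets themselves follow the NVMe DH-HMAC-CHAP representation DHHC-1:tt:<base64 key + CRC>:, where tt is 00 for an untransformed key and 01, 02, or 03 for a key transformed with SHA-256, SHA-384, or SHA-512 respectively. Secrets of this shape can be produced with nvme-cli; a sketch, assuming a recent nvme-cli that provides gen-dhchap-key:

# Emit a new 32-byte secret transformed with SHA-256 for this host NQN;
# prints something like DHHC-1:01:<base64>:
nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn nqn.2024-02.io.spdk:host0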
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.541 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.803 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.803 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.803 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.803 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.803 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: ]] 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.804 08:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.804 nvme0n1 00:28:47.804 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.804 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.804 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.804 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.804 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
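The host/auth.sh@101 and @102 markers that reappear above are the two loops driving this whole section: for the current digest, every DH group is crossed with every key slot, and each combination runs one set-key/connect/verify/detach cycle. In outline (a reconstruction from the loop markers in the trace, not the literal script):

for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048, ffdhe3072, ffdhe4096, ...
    for keyid in "${!keys[@]}"; do         # key slots 0..4
        nvmet_auth_set_key "sha256" "$dhgroup" "$keyid"    # target side
        connect_authenticate "sha256" "$dhgroup" "$keyid"  # initiator side
    done
done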
digest dhgroup keyid key ckey 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: ]] 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.064 
08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.064 nvme0n1 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.064 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: ]] 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.325 08:27:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.325 nvme0n1 00:28:48.325 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
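On the initiator side, connect_authenticate first pins bdev_nvme down to a single digest/DH-group pair, then attaches with the named host and controller keys; the bare nvme0n1 lines between iterations are the bdev names the attach RPC returns on success. Replayed outside the harness with rpc.py (flag names are verbatim from the trace; key2/ckey2 must already exist as SPDK keyring keys):

./scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2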
common/autotest_common.sh@10 -- # set +x 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: ]] 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.587 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.587 08:27:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.588 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.588 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.588 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.588 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.588 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:48.588 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.588 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.850 nvme0n1 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:48.850 08:27:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.850 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.851 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.851 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.851 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.851 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:48.851 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.851 08:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.851 nvme0n1 00:28:48.851 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.112 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.112 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.112 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.112 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.112 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.112 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.112 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:49.112 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.112 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.112 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.112 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:49.112 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.112 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:49.112 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.112 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:49.112 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:49.112 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:49.112 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:28:49.112 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:28:49.112 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: ]] 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
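Each cycle closes the same way: bdev_nvme_get_controllers is piped through jq and the result is asserted against nvme0 (the \n\v\m\e\0 pattern is just bash's escaped form of the literal), then the controller is detached so the next digest/DH-group combination starts from a clean state. As a standalone check:

# Verify the authenticated attach produced a controller, then tear it down.
name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ ${name} == nvme0 ]] || { echo "attach failed" >&2; exit 1; }
./scripts/rpc.py bdev_nvme_detach_controller nvme0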
ip_candidates=() 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.113 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.374 nvme0n1 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:28:49.374 08:27:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: ]] 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.374 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.635 nvme0n1 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: ]] 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.635 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.636 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:49.636 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.636 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:49.636 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:49.636 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:49.636 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:49.636 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.636 08:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.897 nvme0n1 00:28:49.897 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.897 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.897 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.897 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.897 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.897 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
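The get_main_ns_ip trace repeated before every attach resolves which address to dial: an associative array maps the transport to the name of the environment variable holding the address, and indirect expansion turns that name into 10.0.0.1. Reconstructed from the trace (the real helper in nvmf/common.sh may differ in detail):

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z ${TEST_TRANSPORT} ]] && return 1
    ip=${ip_candidates[${TEST_TRANSPORT}]}  # variable *name*, e.g. NVMF_INITIATOR_IP
    [[ -z ${ip} ]] && return 1
    [[ -z ${!ip} ]] && return 1             # indirect expansion -> 10.0.0.1
    echo "${!ip}"
}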
dhgroup=ffdhe4096 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: ]] 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.159 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.420 nvme0n1 00:28:50.420 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.420 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.420 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.420 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.420 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.420 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.420 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.420 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.420 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.420 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.420 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.421 08:27:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.421 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.682 nvme0n1 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: ]] 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.682 08:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.254 nvme0n1 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: ]] 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 
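For readers following the trace: each iteration above has two halves. The echoes at auth.sh@48-51 ('hmac(sha256)', the ffdhe group, and the DHHC-1 secrets) appear to program the authentication attributes on the target side for the host NQN, and auth.sh@60-61 then configure the SPDK host and attach a controller that must authenticate with the matching key. Below is a minimal sketch of that host-side RPC sequence, under stated assumptions: rpc.py is on PATH, the target subsystem nqn.2024-02.io.spdk:cnode0 is listening on 10.0.0.1:4420 over TCP, and key0/ckey0 are key names registered beforehand (the registration step is not shown in this part of the log). The exact flags used here all appear verbatim in the trace; everything else is placeholder.

  #!/usr/bin/env bash
  # Sketch of one digest/dhgroup/keyid iteration of this test.
  DIGEST=sha256          # one of the --dhchap-digests values under test
  DHGROUP=ffdhe6144      # one of the --dhchap-dhgroups values under test

  # Host side: restrict the allowed digest and DH group (auth.sh@60).
  rpc.py bdev_nvme_set_options \
      --dhchap-digests "$DIGEST" \
      --dhchap-dhgroups "$DHGROUP"

  # Attach with the host key and, when a controller key exists, a
  # --dhchap-ctrlr-key for bidirectional authentication (auth.sh@61).
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Verify the controller authenticated and came up (auth.sh@64).
  [[ "$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

  # Tear down before the next digest/dhgroup/keyid combination (auth.sh@65).
  rpc.py bdev_nvme_detach_controller nvme0

Note the expansion ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) in the trace: when a keyid has no controller key (ckey is empty, as for keyid 4 above), the --dhchap-ctrlr-key flag is omitted entirely and the controller is attached with unidirectional authentication only.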
00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.254 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.827 nvme0n1 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.827 08:27:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: ]] 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.827 08:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.088 nvme0n1 00:28:52.088 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.088 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.088 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.088 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.088 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.088 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.088 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.088 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.088 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.088 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.348 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: ]] 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.349 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.609 nvme0n1 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.610 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.870 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.870 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.870 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.870 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.870 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.870 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.870 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.870 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.870 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.870 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.870 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.870 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.870 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:52.870 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.870 08:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.131 nvme0n1 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: ]] 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.132 08:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:54.076 nvme0n1 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: ]] 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.076 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.649 nvme0n1 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:54.649 
08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: ]] 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.649 08:27:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.223 nvme0n1 00:28:55.223 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.223 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.223 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.223 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.223 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: ]] 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.484 
08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.484 08:27:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.056 nvme0n1 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.056 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.999 nvme0n1 00:28:56.999 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.999 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.999 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.999 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.999 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.999 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.999 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.999 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.999 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.999 08:27:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.999 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.999 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:56.999 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:56.999 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.999 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:56.999 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.999 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:56.999 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:56.999 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:56.999 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:28:56.999 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:28:56.999 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:56.999 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: ]] 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.000 nvme0n1 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: ]] 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.000 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.261 nvme0n1 00:28:57.261 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.261 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.261 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.261 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.261 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.261 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.261 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.261 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.261 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.261 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.261 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.261 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.261 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:57.261 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.261 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:57.261 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:57.261 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:57.261 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:28:57.261 08:27:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:28:57.261 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:57.261 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:57.261 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:28:57.261 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: ]] 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.262 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.524 nvme0n1 00:28:57.524 08:27:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: ]] 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.524 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.786 nvme0n1 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.786 08:27:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.047 nvme0n1 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: ]] 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.047 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.308 nvme0n1 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.308 
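
For reference, the trace above has just completed one full verification cycle (sha384 / ffdhe3072 / keyid 0). Reduced to the commands involved, every cycle in this section follows the sketch below; rpc_cmd, nvmet_auth_set_key and get_main_ns_ip are helpers defined by the SPDK test scripts (host/auth.sh, nvmf/common.sh), and key0/ckey0 name DH-HMAC-CHAP keys registered earlier in the test, so this is illustrative rather than a standalone script.

  digest=sha384 dhgroup=ffdhe3072 keyid=0

  # Program the in-kernel nvmet target with the key under test.
  nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

  # Constrain the SPDK host to the same digest/dhgroup, then attach over TCP
  # with DH-HMAC-CHAP using the fixed test address and NQNs seen in the trace.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a "$(get_main_ns_ip)" -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

  # Verify the authenticated connect produced a controller, then detach so the
  # next (digest, dhgroup, keyid) combination starts from a clean state.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
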
08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: ]] 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:58.308 08:27:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.308 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.570 nvme0n1 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: ]] 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.570 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.833 nvme0n1 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: ]] 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.833 08:27:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.095 nvme0n1 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:59.095 
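
The empty "# ckey=" traced just above (keyid 4 carries no controller key) is absorbed by the ${ckeys[keyid]:+...} expansion at host/auth.sh@58: when the table entry is unset or empty it contributes no words, so bdev_nvme_attach_controller runs without --dhchap-ctrlr-key at all, which matches the key4-only attach seen in this iteration. A standalone sketch of the idiom, with placeholder key values rather than the real test keys:

  # Placeholder table: keyid 3 has a controller key, keyid 4 does not.
  ckeys=([3]="DHHC-1:00:placeholder:" [4]="")

  for keyid in 3 4; do
      # Expands to two words (--dhchap-ctrlr-key ckeyN) or to nothing.
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${ckey[*]:-(no controller key)}"
  done
  # keyid=3 -> --dhchap-ctrlr-key ckey3
  # keyid=4 -> (no controller key)
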
08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.095 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.366 nvme0n1 00:28:59.366 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.366 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.366 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.366 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.366 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.366 
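
The get_main_ns_ip block traced before every attach (nvmf/common.sh@769-783) picks which environment variable holds the address to dial: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, which is why every call in this run prints 10.0.0.1. A reconstruction from the trace; the variable indirection between the @776 and @783 lines is not shown verbatim in the xtrace, so that step is an assumption, and the [[ -z ]] fallback branches, never taken in this run, are omitted:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      # Select the variable *name* for the transport under test...
      ip=${ip_candidates[$TEST_TRANSPORT]}
      # ...then dereference it (assumed indirection): with TEST_TRANSPORT=tcp
      # and NVMF_INITIATOR_IP=10.0.0.1 this prints 10.0.0.1.
      echo "${!ip}"
  }
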
08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.366 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.366 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.366 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.366 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.366 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.366 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:59.366 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.366 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:59.366 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.366 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:59.366 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:59.366 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:59.366 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: ]] 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.367 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.628 nvme0n1 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: ]] 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:59.628 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.629 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.629 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:59.629 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.629 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:59.629 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:59.629 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:59.629 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:59.629 08:27:56 
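The iterations above all follow the same shape: connect_authenticate (host/auth.sh@55-65) restricts the host to one digest/dhgroup pair, attaches with the key slot under test, checks that the controller actually came up, and detaches. A minimal bash sketch of that flow, using only the RPC calls and flags visible in the trace (rpc_cmd, get_main_ns_ip, and the keys/ckeys arrays belong to the SPDK test harness and are assumed here):

  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      # Drop the controller-key flags entirely when ckeys[keyid] is empty,
      # so slots without a controller secret exercise unidirectional auth.
      local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

      # Allow only the digest/dhgroup pair under test on the host side.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
          --dhchap-dhgroups "$dhgroup"

      # Attach with the key under test; DH-HMAC-CHAP must succeed here.
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
          -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"

      # Verify the controller exists, then detach for the next key slot.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }

The ${ckeys[keyid]:+...} expansion is why keyid 4 later in the trace connects with --dhchap-key only: its ckey is empty, so the array expands to nothing.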
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.629 08:27:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.891 nvme0n1 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: ]] 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.891 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.153 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.153 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.153 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.153 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.153 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.153 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.153 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.153 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.153 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.153 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.153 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.153 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.153 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:00.153 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.153 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.414 nvme0n1 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: ]] 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
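On the target side, nvmet_auth_set_key (host/auth.sh@42-51) shows only bare echoes in the trace because xtrace does not print redirections. On a kernel nvmet target those values would typically be written into the per-host configfs attributes; the paths below are an assumption to that effect, not something the trace confirms:

  # Assumed destinations for the traced echoes (kernel nvmet configfs);
  # xtrace hides the actual redirect targets, so treat these as illustrative.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha384)' > "$host/dhchap_hash"    # digest, kernel crypto API name
  echo ffdhe4096 > "$host/dhchap_dhgroup"      # DH group for augmented CHAP
  echo "$key" > "$host/dhchap_key"             # host secret (DHHC-1:...)
  [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"  # bidirectional auth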
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.414 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.676 nvme0n1 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:00.676 08:27:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.676 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.677 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.677 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.677 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:00.677 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.677 08:27:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.937 nvme0n1 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: ]] 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:00.937 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.938 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:00.938 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.938 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.938 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.938 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.938 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.938 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.938 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.938 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.938 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.938 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.938 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.938 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.938 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.938 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.938 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:00.938 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.938 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.509 nvme0n1 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: ]] 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.509 08:27:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.083 nvme0n1 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.083 08:27:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: ]] 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.083 08:27:59 
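get_main_ns_ip (nvmf/common.sh@769-783), traced before every attach, resolves the address the initiator should dial from the transport in use. Condensed from the trace; the name of the transport variable is not visible in xtrace, so $TEST_TRANSPORT is assumed:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT ]] && return 1
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}  # holds the *name* of the env var
      [[ -z ${!ip} ]] && return 1           # indirect expansion; 10.0.0.1 here
      echo "${!ip}"
  }

This is why every bdev_nvme_attach_controller in this tcp run targets 10.0.0.1:4420.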
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.083 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.344 nvme0n1 00:29:02.344 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.344 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.344 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.344 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.344 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.604 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.604 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.604 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.604 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.604 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.604 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.604 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.604 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:29:02.604 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.604 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:02.604 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:02.604 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:02.604 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:29:02.604 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: ]] 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:02.605 08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.605 
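The secrets cycling through the trace are NVMe DH-HMAC-CHAP secret representations, DHHC-1:<t>:<base64>:, where the two-digit field records the optional hash transform applied to the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload carries the secret with a CRC-32 appended. A quick length check on the keyid-0 secret from the trace (a sketch; assumes GNU coreutils):

  key='DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md:'
  printf '%s' "$key" | cut -d: -f3 | base64 -d | wc -c
  # prints 36: a 32-byte secret plus its 4-byte CRC-32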
08:27:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.865 nvme0n1 00:29:02.865 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.865 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.865 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.865 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.865 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.865 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.125 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.125 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.125 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.125 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.125 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.125 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.125 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:29:03.125 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.125 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:03.125 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:03.125 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:03.125 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:29:03.125 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:03.125 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:03.125 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.126 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.387 nvme0n1 00:29:03.387 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.387 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.387 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.387 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.387 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.387 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.387 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.387 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.387 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.387 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.649 08:28:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: ]] 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.649 08:28:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.221 nvme0n1 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: ]] 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.221 08:28:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.793 nvme0n1 00:29:04.793 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.793 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.793 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.793 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.793 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.793 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
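
The ip_candidates run traced above (nvmf/common.sh@769-@783) is the body of get_main_ns_ip resolving the initiator address for the attach that follows. Reconstructed from the trace, it stores the name of the environment variable appropriate for the transport and dereferences it via indirect expansion; lines @779-@782 of the helper are not exercised in this run and are elided here, so the real helper may differ in those details:

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # Unknown or unset transport -> no address to report.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}  # variable *name*, e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1           # indirect expansion: its value
    echo "${!ip}"                         # 10.0.0.1 in this run
}
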
xtrace_disable 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: ]] 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.052 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.053 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.053 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:05.053 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:05.053 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:05.053 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.053 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.053 
08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:05.053 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.053 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:05.053 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:05.053 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:05.053 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:05.053 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.053 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.623 nvme0n1 00:29:05.623 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.623 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.623 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.623 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.623 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.623 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.623 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.623 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.623 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.623 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.623 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.623 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.623 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:29:05.623 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.623 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:05.623 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:05.623 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:05.623 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: ]] 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.624 08:28:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.618 nvme0n1 00:29:06.618 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.618 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.619 08:28:03 
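
Every keyid round above has the same shape: connect_authenticate (host/auth.sh@55-@65) pins the initiator to a single digest/dhgroup pair, attaches with the key under test, checks that the controller actually appeared, and detaches again. A compact reconstruction from the trace, with hostnqn/subnqn assumed to hold the values visible in the attach calls; the escaped \n\v\m\e\0 in the log is just how xtrace renders the quoted right-hand side of the comparison:

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3 ckey
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # Allow only the digest/dhgroup combination under test to be negotiated.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t "$TEST_TRANSPORT" -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    # Authentication succeeded only if the controller really came up.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}
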
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:06.619 08:28:03 
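
keyid 4 is the unidirectional case: ckeys[4] is empty, so the [[ -z '' ]] check above succeeds and the attach goes out without --dhchap-ctrlr-key, i.e. the host authenticates to the target but does not demand the reverse. The mechanism is bash's ${var:+word} expansion, which yields word only when var is set and non-empty; a standalone illustration with hypothetical array contents:

# ${ckeys[keyid]:+...} builds an optional argument list.
ckeys=([0]="DHHC-1:03:exampleonly=:" [4]="")  # hypothetical contents
keyid=4
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"  # 0 -> bidirectional auth is skipped for keyid 4
keyid=0
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${ckey[@]}"   # --dhchap-ctrlr-key ckey0
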
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.619 08:28:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.191 nvme0n1 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: ]] 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.191 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:07.192 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.192 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.192 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.192 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.192 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:07.192 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:07.192 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:07.192 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.192 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.192 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:07.192 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.192 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:07.192 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:07.192 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:07.192 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:07.192 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.192 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:07.192 nvme0n1 00:29:07.192 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.192 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.192 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.192 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.192 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: ]] 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.453 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:07.454 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:07.454 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:07.454 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.454 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.454 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:07.454 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.454 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:07.454 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:07.454 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:07.454 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:07.454 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.454 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.454 nvme0n1 00:29:07.454 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.454 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.454 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.454 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.454 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.454 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:07.715 
08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: ]] 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.715 nvme0n1 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.715 08:28:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.988 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.988 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.988 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:07.988 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.988 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:07.988 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:07.988 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:07.988 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:29:07.988 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:29:07.988 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:07.988 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:07.988 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:29:07.988 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: ]] 00:29:07.988 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:29:07.988 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.989 
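
Outside the autotest harness, rpc_cmd is a wrapper around SPDK's scripts/rpc.py, so a round like the sha512/ffdhe2048 keyid-2 attach above can be reproduced by hand. A sketch with the default RPC socket path assumed; key2/ckey2 are the names of keys registered with the application beforehand (that keyring setup is not part of this excerpt):

./scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_detach_controller nvme0
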
08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.989 nvme0n1 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.989 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.251 nvme0n1 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: ]] 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.251 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.512 nvme0n1 00:29:08.512 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.512 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.512 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.512 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.512 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.512 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.512 
08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.512 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: ]] 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:08.513 08:28:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.513 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.774 nvme0n1 00:29:08.774 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.774 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.774 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.774 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.774 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.774 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.774 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.774 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.774 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.774 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.774 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.774 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.774 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:29:08.774 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.774 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:08.774 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:08.774 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:08.774 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:29:08.774 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:29:08.774 08:28:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:08.774 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:08.774 08:28:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:29:08.774 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: ]] 00:29:08.774 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:29:08.774 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:29:08.774 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.774 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:08.774 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:08.774 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:08.774 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.774 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:08.774 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.774 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.774 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.774 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.774 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:08.774 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:08.774 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:08.774 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.774 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.774 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:08.774 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.774 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:08.774 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:08.775 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:08.775 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:08.775 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.775 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.036 nvme0n1 00:29:09.036 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.036 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.036 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.036 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.036 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.036 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.036 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: ]] 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.037 08:28:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.037 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.299 nvme0n1 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:09.299 
08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.299 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
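At this point each keyid in turn has been exercised against sha512/ffdhe3072 — the keyid=4 attach was just issued, and its verification plus the identical ffdhe4096 and ffdhe6144 passes follow below. Condensed from the xtrace, one such round-trip reduces to the sketch below. This is a paraphrase of what host/auth.sh logs, not the script verbatim: rpc_cmd is the test suite's RPC wrapper, nvmet_auth_set_key is the target-side helper whose internals are traced at host/auth.sh@42-51, and the key names, NQNs, and 10.0.0.1:4420 endpoint are exactly the values visible in the trace.

    # One DHCHAP verification round-trip (sha512 / ffdhe3072, keyid=1),
    # condensed from the xtrace above.

    # Target side: install the DHHC-1 host secret and the bidirectional
    # controller secret for keyid 1, tagged 'hmac(sha512)' / ffdhe3072.
    nvmet_auth_set_key sha512 ffdhe3072 1

    # Host side: restrict negotiation to the digest and dhgroup under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

    # Connect with the matching key pair; 10.0.0.1:4420 is the address that
    # get_main_ns_ip resolves (NVMF_INITIATOR_IP for the tcp transport).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Success check: the authenticated controller must appear as nvme0, after
    # which it is detached so the next keyid/dhgroup combination starts clean.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

For keyid=4 the controller key is empty, so the ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion at host/auth.sh@58 drops the --dhchap-ctrlr-key flag and the attach is issued with --dhchap-key key4 alone, as the trace above shows.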
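The ten-entry get_main_ns_ip blob that recurs before every attach (nvmf/common.sh@769-783) reduces to a small transport-to-env-var lookup. The sketch below reconstructs its logic from the trace; the function's real signature is not shown, so taking the transport as a parameter is an assumption — the trace only ever evaluates the literal tcp and resolves NVMF_INITIATOR_IP to 10.0.0.1.

    # Reconstructed from the xtrace at nvmf/common.sh@769-783 (assumption:
    # the transport arrives as $1; the trace only shows the literal "tcp").
    get_main_ns_ip() {
        local transport=$1 ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP  # rdma tests use the first target IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP      # tcp tests use the initiator IP
        [[ -z $transport ]] && return 1             # traced as: [[ -z tcp ]]
        [[ -z ${ip_candidates[$transport]} ]] && return 1
        ip=${ip_candidates[$transport]}             # traced as: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                 # traced as: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                               # traced as: echo 10.0.0.1
    }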
00:29:09.562 nvme0n1 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: ]] 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:09.562 08:28:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.562 08:28:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.823 nvme0n1 00:29:09.823 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.823 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.823 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.823 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.823 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.823 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.823 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.823 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.823 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.823 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.085 08:28:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: ]] 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.085 08:28:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.085 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.345 nvme0n1 00:29:10.345 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.345 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.345 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: ]] 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.346 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.607 nvme0n1 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: ]] 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.607 08:28:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.868 nvme0n1 00:29:10.868 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.868 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.868 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.868 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.868 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.868 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.868 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.868 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.868 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.868 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.868 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.869 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.869 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:10.869 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.869 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:10.869 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:10.869 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:10.869 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:29:10.869 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:10.869 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:10.869 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:10.869 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:29:10.869 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:10.869 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:10.869 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.869 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:10.869 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:10.869 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:10.869 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.869 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:10.869 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.869 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.129 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.129 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.129 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:11.129 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:11.129 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:11.130 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.130 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.130 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:11.130 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.130 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:11.130 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:11.130 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:11.130 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:11.130 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.130 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.130 nvme0n1 00:29:11.130 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: ]] 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:11.391 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:11.392 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.392 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:11.392 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.392 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.392 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.392 08:28:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.392 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:11.392 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:11.392 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:11.392 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.392 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.392 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:11.392 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.392 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:11.392 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:11.392 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:11.392 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:11.392 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.392 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.652 nvme0n1 00:29:11.652 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.652 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.652 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.652 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.652 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.652 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: ]] 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:11.913 08:28:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:11.913 08:28:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.913 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.174 nvme0n1 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: ]] 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.174 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.746 nvme0n1 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: ]] 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.746 08:28:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.318 nvme0n1 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:13.318 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:13.318 08:28:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:13.319 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:13.319 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:13.319 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:13.319 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.319 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.319 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.319 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:13.319 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:13.319 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:13.319 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:13.319 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.319 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.319 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:13.319 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.319 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:13.319 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:13.319 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:13.319 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:13.319 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.319 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.579 nvme0n1 00:29:13.579 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.579 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:13.579 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:13.579 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.579 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFlYjM0YzRiYzMwOTNmNjI4YTVhMTkwYWQ2OTkyODJDF6md: 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: ]] 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWM5OWI5NTc2MmNjZjVlYWJjNDZiZDcwZTlhMmQwNjA1YTk5NDgwYzFmMjljM2U3ZWU5MWJlYzU5YTFhOGVmMYsfkUs=: 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.840 08:28:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.412 nvme0n1 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: ]] 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:14.412 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:14.413 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:14.413 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:14.413 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:14.413 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:14.413 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:14.413 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:14.413 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:14.413 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:14.413 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:14.413 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:14.413 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.413 08:28:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.355 nvme0n1 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:15.355 08:28:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: ]] 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.355 08:28:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.355 08:28:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.926 nvme0n1 00:29:15.926 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.926 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.926 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:15.926 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.926 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.926 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNhNThiZDIxNmFhZTkxMWRkYzVhNzY0MDZkM2E1ZTA4N2RjOTYyNDVkZmVkNjMyS1GskQ==: 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: ]] 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MmJhZTBiOTQxYTc1MGE5NjhlNGU1Yzg5NWY4MjY1NTmRWvwA: 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:15.927 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.927 
08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.498 nvme0n1 00:29:16.498 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.498 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.498 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:16.498 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.498 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.498 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.758 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:16.758 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:16.758 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.758 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.758 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.758 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:16.758 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:16.758 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZjRmMWMyZjMwYWI2YTg3ZDljMGE3ZjE3Zjg2YTk3MjRjZDcwYTU1NTQ5Mjg5OTQ5Yjg2MzM0YWQ4YjM0MDM2Ztgazkc=: 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.759 08:28:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.330 nvme0n1 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: ]] 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.330 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.591 request: 00:29:17.591 { 00:29:17.591 "name": "nvme0", 00:29:17.591 "trtype": "tcp", 00:29:17.591 "traddr": "10.0.0.1", 00:29:17.591 "adrfam": "ipv4", 00:29:17.591 "trsvcid": "4420", 00:29:17.591 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:17.591 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:17.591 "prchk_reftag": false, 00:29:17.591 "prchk_guard": false, 00:29:17.591 "hdgst": false, 00:29:17.591 "ddgst": false, 00:29:17.591 "allow_unrecognized_csi": false, 00:29:17.591 "method": "bdev_nvme_attach_controller", 00:29:17.591 "req_id": 1 00:29:17.591 } 00:29:17.591 Got JSON-RPC error response 00:29:17.591 response: 00:29:17.591 { 00:29:17.591 "code": -5, 00:29:17.591 "message": "Input/output error" 00:29:17.591 } 00:29:17.591 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
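The request/response pair above is the expected-failure path: the target host entry holds a DH-HMAC-CHAP key, so an attach that omits --dhchap-key is rejected and rpc_cmd surfaces JSON-RPC error -5 (Input/output error), which the NOT wrapper visible in the trace inverts into a pass. A minimal standalone sketch of the same check, assuming SPDK's stock scripts/rpc.py client and jq are available and reusing the listener address and NQNs from this run; the rpc variable is a placeholder path:

    rpc=./scripts/rpc.py   # placeholder; point at the SPDK tree in use

    # Attach with no --dhchap-key against a subsystem that requires
    # DH-HMAC-CHAP; this mirrors the failing request logged above and
    # must exit non-zero (JSON-RPC -5, Input/output error).
    if "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "unauthenticated attach unexpectedly succeeded" >&2
        exit 1
    fi

    # The failed attach must not leave a stale controller behind.
    (( $("$rpc" bdev_nvme_get_controllers | jq length) == 0 ))
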
00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.592 request: 00:29:17.592 { 00:29:17.592 "name": "nvme0", 00:29:17.592 "trtype": "tcp", 00:29:17.592 "traddr": "10.0.0.1", 00:29:17.592 "adrfam": "ipv4", 00:29:17.592 "trsvcid": "4420", 00:29:17.592 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:17.592 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:17.592 "prchk_reftag": false, 00:29:17.592 "prchk_guard": false, 00:29:17.592 "hdgst": false, 00:29:17.592 "ddgst": false, 00:29:17.592 "dhchap_key": "key2", 00:29:17.592 "allow_unrecognized_csi": false, 00:29:17.592 "method": "bdev_nvme_attach_controller", 00:29:17.592 "req_id": 1 00:29:17.592 } 00:29:17.592 Got JSON-RPC error response 00:29:17.592 response: 00:29:17.592 { 00:29:17.592 "code": -5, 00:29:17.592 "message": "Input/output error" 00:29:17.592 } 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
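The rejection above is the second failure mode: presenting key2 where the target was provisioned with key1 (nvmet_auth_set_key sha256 ffdhe2048 1 earlier in the trace) fails at attach time with the same -5 error as the missing-key case. Rekeying an already-connected controller goes through bdev_nvme_set_keys instead, which this run exercises further below: after the target side is rotated to key2/ckey2, a matching update is accepted, while the mismatched key1/ckey2 pair is refused with JSON-RPC error -13 (Permission denied). A sketch of that sequence, under the same rpc.py and jq assumptions as the previous example:

    # The target has already been rotated to key2/ckey2 at this point in
    # the run (nvmet_auth_set_key sha256 ffdhe2048 2); update the live
    # controller to match. This call is accepted.
    "$rpc" bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # A mismatched pair is refused: the log below shows JSON-RPC error
    # -13 (Permission denied) for exactly this call.
    if "$rpc" bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
        echo "mismatched key pair unexpectedly accepted" >&2
        exit 1
    fi

    # The harness then polls until no controllers remain; since the earlier
    # attach passed --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1, a
    # controller whose reauthentication failed appears to be torn down after
    # two one-second waits in this trace.
    while (( $("$rpc" bdev_nvme_get_controllers | jq length) != 0 )); do
        sleep 1s
    done
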
00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.592 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.854 request: 00:29:17.854 { 00:29:17.854 "name": "nvme0", 00:29:17.854 "trtype": "tcp", 00:29:17.854 "traddr": "10.0.0.1", 00:29:17.854 "adrfam": "ipv4", 00:29:17.854 "trsvcid": "4420", 00:29:17.854 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:17.854 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:17.854 "prchk_reftag": false, 00:29:17.854 "prchk_guard": false, 00:29:17.854 "hdgst": false, 00:29:17.854 "ddgst": false, 00:29:17.854 "dhchap_key": "key1", 00:29:17.854 "dhchap_ctrlr_key": "ckey2", 00:29:17.854 "allow_unrecognized_csi": false, 00:29:17.854 "method": "bdev_nvme_attach_controller", 00:29:17.854 "req_id": 1 00:29:17.854 } 00:29:17.854 Got JSON-RPC error response 00:29:17.854 response: 00:29:17.854 { 00:29:17.854 "code": -5, 00:29:17.854 "message": "Input/output 
error" 00:29:17.854 } 00:29:17.854 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:17.854 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:17.854 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:17.854 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:17.854 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:17.854 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:29:17.854 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:17.854 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:17.854 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:17.854 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.854 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.854 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:17.854 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.854 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:17.854 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:17.854 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:17.854 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:17.854 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.854 08:28:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.854 nvme0n1 00:29:17.854 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.854 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:17.854 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.854 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:17.854 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:17.854 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:17.854 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:29:17.854 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:29:17.854 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:17.854 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:17.854 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:29:17.854 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: ]] 00:29:17.854 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:29:17.854 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:17.854 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.854 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.854 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.854 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.854 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:29:17.854 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.854 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.854 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.119 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:18.119 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:18.119 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:18.119 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:18.119 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:18.119 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:18.119 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:18.119 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:18.119 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:18.119 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.119 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.119 request: 00:29:18.119 { 00:29:18.119 "name": "nvme0", 00:29:18.119 "dhchap_key": "key1", 00:29:18.119 "dhchap_ctrlr_key": "ckey2", 00:29:18.119 "method": "bdev_nvme_set_keys", 00:29:18.119 "req_id": 1 00:29:18.119 } 00:29:18.119 Got JSON-RPC error response 00:29:18.119 response: 00:29:18.119 { 00:29:18.119 "code": -13, 00:29:18.119 "message": "Permission denied" 00:29:18.119 } 00:29:18.119 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:18.119 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:18.119 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:18.120 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:18.120 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:29:18.120 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.120 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:18.120 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.120 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.120 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.120 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:18.120 08:28:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:19.065 08:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:19.065 08:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:19.065 08:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.065 08:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:19.065 08:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.065 08:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:19.065 08:28:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:20.450 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:20.450 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:20.450 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.450 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.450 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.450 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:29:20.450 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:20.450 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.450 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:20.450 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDM2MzI4NTk3ZmZmZWZiYzE5MzUyNzhiNmMwNmQ4NjQ4OWNmNTgxN2NlZWY4MTcyjQfTqA==: 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: ]] 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YmJlMGM4ZGQxNzg5MTljZDNmYWMyOGNlNGM1MGYyMzc1NWI3ZjliNWExMjc2OTFm6DEDow==: 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.451 nvme0n1 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDA1ZjA3MmRkYjU4MjFkNTY5YTY0M2YwODUzOWJmZTdWmlMo: 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: ]] 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWI0MDQzNGQ2ZTQzYWUyMmJjODRlZmEwM2MxNzNjMzY2BhUr: 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0
00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.451 request:
00:29:20.451 {
00:29:20.451 "name": "nvme0",
00:29:20.451 "dhchap_key": "key2",
00:29:20.451 "dhchap_ctrlr_key": "ckey1",
00:29:20.451 "method": "bdev_nvme_set_keys",
00:29:20.451 "req_id": 1
00:29:20.451 }
00:29:20.451 Got JSON-RPC error response
00:29:20.451 response:
00:29:20.451 {
00:29:20.451 "code": -13,
00:29:20.451 "message": "Permission denied"
00:29:20.451 }
00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 ))
00:29:20.451 08:28:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s
00:29:21.394 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:29:21.394 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:29:21.394 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:21.394 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:29:21.394 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 ))
00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT
00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup
00:29:21.655 08:28:18
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:21.655 rmmod nvme_tcp 00:29:21.655 rmmod nvme_fabrics 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2125554 ']' 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2125554 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2125554 ']' 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2125554 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2125554 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2125554' 00:29:21.655 killing process with pid 2125554 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2125554 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2125554 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:29:21.655 08:28:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.200 08:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:24.200 08:28:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:24.200 08:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:24.200 08:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:24.200 08:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:24.200 08:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:29:24.200 08:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:24.200 08:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:24.200 08:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:24.200 08:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:24.200 08:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:29:24.200 08:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:29:24.200 08:28:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:27.502 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:27.503 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:27.503 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:27.503 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:27.503 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:27.503 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:27.503 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:27.503 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:27.503 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:27.503 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:27.503 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:27.503 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:27.503 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:27.503 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:27.503 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:27.503 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:27.503 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:28.074 08:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Z13 /tmp/spdk.key-null.ES6 /tmp/spdk.key-sha256.y71 /tmp/spdk.key-sha384.VN1 /tmp/spdk.key-sha512.Anu /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:28.074 08:28:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:31.375 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:31.375 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:31.375 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:29:31.375 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:31.375 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:31.375 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:31.375 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:31.375 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:31.375 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:31.375 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:29:31.375 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:31.375 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:31.375 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:31.375 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:31.375 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:31.375 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:31.375 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:31.946 00:29:31.946 real 1m0.884s 00:29:31.946 user 0m54.800s 00:29:31.946 sys 0m15.979s 00:29:31.946 08:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:31.946 08:28:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.946 ************************************ 00:29:31.946 END TEST nvmf_auth_host 00:29:31.946 ************************************ 00:29:31.946 08:28:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:29:31.946 08:28:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:31.946 08:28:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:31.946 08:28:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:31.946 08:28:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.946 ************************************ 00:29:31.946 START TEST nvmf_digest 00:29:31.946 ************************************ 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:31.946 * Looking for test storage... 
00:29:31.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:29:31.946 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:32.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.208 --rc genhtml_branch_coverage=1 00:29:32.208 --rc genhtml_function_coverage=1 00:29:32.208 --rc genhtml_legend=1 00:29:32.208 --rc geninfo_all_blocks=1 00:29:32.208 --rc geninfo_unexecuted_blocks=1 00:29:32.208 00:29:32.208 ' 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:32.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.208 --rc genhtml_branch_coverage=1 00:29:32.208 --rc genhtml_function_coverage=1 00:29:32.208 --rc genhtml_legend=1 00:29:32.208 --rc geninfo_all_blocks=1 00:29:32.208 --rc geninfo_unexecuted_blocks=1 00:29:32.208 00:29:32.208 ' 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:32.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.208 --rc genhtml_branch_coverage=1 00:29:32.208 --rc genhtml_function_coverage=1 00:29:32.208 --rc genhtml_legend=1 00:29:32.208 --rc geninfo_all_blocks=1 00:29:32.208 --rc geninfo_unexecuted_blocks=1 00:29:32.208 00:29:32.208 ' 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:32.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:32.208 --rc genhtml_branch_coverage=1 00:29:32.208 --rc genhtml_function_coverage=1 00:29:32.208 --rc genhtml_legend=1 00:29:32.208 --rc geninfo_all_blocks=1 00:29:32.208 --rc geninfo_unexecuted_blocks=1 00:29:32.208 00:29:32.208 ' 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:32.208 
08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.208 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:32.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:32.209 08:28:29 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:29:32.209 08:28:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:40.344 
08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:40.344 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:40.344 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:40.344 Found net devices under 0000:4b:00.0: cvl_0_0 
00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:40.344 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:40.345 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:40.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:40.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:29:40.345 00:29:40.345 --- 10.0.0.2 ping statistics --- 00:29:40.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.345 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:40.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:40.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:29:40.345 00:29:40.345 --- 10.0.0.1 ping statistics --- 00:29:40.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.345 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:40.345 ************************************ 00:29:40.345 START TEST nvmf_digest_clean 00:29:40.345 ************************************ 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2142579 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2142579 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2142579 ']' 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:40.345 08:28:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:40.345 [2024-11-28 08:28:36.939250] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:29:40.345 [2024-11-28 08:28:36.939317] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:40.345 [2024-11-28 08:28:37.042281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.345 [2024-11-28 08:28:37.093506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:40.345 [2024-11-28 08:28:37.093561] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:40.346 [2024-11-28 08:28:37.093570] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:40.346 [2024-11-28 08:28:37.093577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:40.346 [2024-11-28 08:28:37.093585] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
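The target bring-up traced above reduces to a handful of commands. A minimal sketch, assuming the cvl_0_0_ns_spdk namespace and repository path shown earlier in this log; the readiness poll is a simplification of the suite's waitforlisten helper, not its exact code:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Start the target in the test namespace; --wait-for-rpc defers framework
# initialization until an explicit RPC arrives.
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!

# Poll the default RPC socket (/var/tmp/spdk.sock) until the app answers,
# then complete initialization so the accel/digest options take effect.
until $SPDK/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
$SPDK/scripts/rpc.py framework_start_init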
00:29:40.346 [2024-11-28 08:28:37.094445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.606 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:40.606 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:40.606 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:40.606 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:40.606 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:40.606 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:40.606 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:40.606 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:40.606 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:40.606 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.606 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:40.867 null0 00:29:40.867 [2024-11-28 08:28:37.906279] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:40.867 [2024-11-28 08:28:37.930575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:40.867 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.867 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:40.867 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:40.867 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:40.867 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:40.867 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:40.867 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:40.867 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:40.867 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2142844 00:29:40.867 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2142844 /var/tmp/bperf.sock 00:29:40.867 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2142844 ']' 00:29:40.867 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:40.867 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:40.867 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:40.867 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:40.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:40.867 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:40.867 08:28:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:40.867 [2024-11-28 08:28:37.998383] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:29:40.867 [2024-11-28 08:28:37.998449] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2142844 ] 00:29:40.867 [2024-11-28 08:28:38.092938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.867 [2024-11-28 08:28:38.144989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.809 08:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:41.809 08:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:41.809 08:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:41.809 08:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:41.809 08:28:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:42.070 08:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:42.070 08:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:42.331 nvme0n1 00:29:42.331 08:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:42.331 08:28:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:42.331 Running I/O for 2 seconds... 
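Condensed from the trace above, the measurement pass that produces the numbers below is roughly the following; flags, addresses, and NQNs are copied from the log, with $SPDK standing in for the repository path:

# bdevperf idles (-z) on its own RPC socket until told to run
$SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
# Attach the target over TCP with data digest (--ddgst) enabled
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Kick off the timed 2-second randread run
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests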
00:29:44.216 18073.00 IOPS, 70.60 MiB/s [2024-11-28T07:28:41.505Z] 19138.50 IOPS, 74.76 MiB/s
00:29:44.216 Latency(us)
00:29:44.216 [2024-11-28T07:28:41.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:44.217 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:44.217 nvme0n1 : 2.00 19180.09 74.92 0.00 0.00 6666.67 2402.99 22500.69
00:29:44.217 [2024-11-28T07:28:41.506Z] ===================================================================================================================
00:29:44.217 [2024-11-28T07:28:41.506Z] Total : 19180.09 74.92 0.00 0.00 6666.67 2402.99 22500.69
00:29:44.477 {
00:29:44.477 "results": [
00:29:44.477 {
00:29:44.477 "job": "nvme0n1",
00:29:44.477 "core_mask": "0x2",
00:29:44.477 "workload": "randread",
00:29:44.477 "status": "finished",
00:29:44.477 "queue_depth": 128,
00:29:44.477 "io_size": 4096,
00:29:44.477 "runtime": 2.004318,
00:29:44.477 "iops": 19180.09018528996,
00:29:44.477 "mibps": 74.9222272862889,
00:29:44.477 "io_failed": 0,
00:29:44.477 "io_timeout": 0,
00:29:44.477 "avg_latency_us": 6666.66752126525,
00:29:44.477 "min_latency_us": 2402.9866666666667,
00:29:44.477 "max_latency_us": 22500.693333333333
00:29:44.477 }
00:29:44.477 ],
00:29:44.477 "core_count": 1
00:29:44.477 }
00:29:44.477 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:29:44.477 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:29:44.477 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:29:44.477 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:29:44.477 | select(.opcode=="crc32c")
00:29:44.477 | "\(.module_name) \(.executed)"'
00:29:44.477 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:29:44.477 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:29:44.477 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:29:44.477 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:29:44.477 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:29:44.477 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2142844
00:29:44.477 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2142844 ']'
00:29:44.477 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2142844
00:29:44.477 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:29:44.477 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:44.477 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2142844
00:29:44.738 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:44.738 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '['
reactor_1 = sudo ']' 00:29:44.738 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2142844' 00:29:44.738 killing process with pid 2142844 00:29:44.738 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2142844 00:29:44.738 Received shutdown signal, test time was about 2.000000 seconds 00:29:44.738 00:29:44.738 Latency(us) 00:29:44.738 [2024-11-28T07:28:42.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.738 [2024-11-28T07:28:42.027Z] =================================================================================================================== 00:29:44.738 [2024-11-28T07:28:42.027Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:44.738 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2142844 00:29:44.738 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:44.738 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:44.738 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:44.738 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:44.738 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:44.738 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:44.738 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:44.738 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2143527 00:29:44.738 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2143527 /var/tmp/bperf.sock 00:29:44.738 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2143527 ']' 00:29:44.738 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:44.738 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:44.738 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:44.738 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:44.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:44.738 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:44.738 08:28:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:44.738 [2024-11-28 08:28:41.941509] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:29:44.738 [2024-11-28 08:28:41.941562] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2143527 ] 00:29:44.738 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:44.738 Zero copy mechanism will not be used. 00:29:44.999 [2024-11-28 08:28:42.026019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.999 [2024-11-28 08:28:42.054171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.571 08:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:45.571 08:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:45.571 08:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:45.571 08:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:45.571 08:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:45.831 08:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:45.831 08:28:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:46.092 nvme0n1 00:29:46.092 08:28:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:46.092 08:28:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:46.353 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:46.353 Zero copy mechanism will not be used. 00:29:46.353 Running I/O for 2 seconds... 
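After each timed run, host/digest.sh reads the crc32c accounting back from the bperf app and checks that the expected accel module did the work. A minimal sketch of that check, reconstructed from the rpc.py and jq calls in the surrounding trace (same socket path; with scan_dsa=false the expected module is software):

    read -r acc_module acc_executed < <(
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 )) && [[ "$acc_module" == software ]]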
00:29:48.232 4189.00 IOPS, 523.62 MiB/s [2024-11-28T07:28:45.521Z] 3901.50 IOPS, 487.69 MiB/s 00:29:48.232 Latency(us) 00:29:48.232 [2024-11-28T07:28:45.521Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.232 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:48.232 nvme0n1 : 2.00 3903.88 487.99 0.00 0.00 4096.08 856.75 10758.83 00:29:48.232 [2024-11-28T07:28:45.521Z] =================================================================================================================== 00:29:48.232 [2024-11-28T07:28:45.521Z] Total : 3903.88 487.99 0.00 0.00 4096.08 856.75 10758.83 00:29:48.232 { 00:29:48.232 "results": [ 00:29:48.232 { 00:29:48.232 "job": "nvme0n1", 00:29:48.232 "core_mask": "0x2", 00:29:48.232 "workload": "randread", 00:29:48.232 "status": "finished", 00:29:48.232 "queue_depth": 16, 00:29:48.232 "io_size": 131072, 00:29:48.233 "runtime": 2.002878, 00:29:48.233 "iops": 3903.8823133510878, 00:29:48.233 "mibps": 487.98528916888597, 00:29:48.233 "io_failed": 0, 00:29:48.233 "io_timeout": 0, 00:29:48.233 "avg_latency_us": 4096.075958562476, 00:29:48.233 "min_latency_us": 856.7466666666667, 00:29:48.233 "max_latency_us": 10758.826666666666 00:29:48.233 } 00:29:48.233 ], 00:29:48.233 "core_count": 1 00:29:48.233 } 00:29:48.233 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:48.233 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:48.233 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:48.233 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:48.233 | select(.opcode=="crc32c") 00:29:48.233 | "\(.module_name) \(.executed)"' 00:29:48.233 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:48.493 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:48.493 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:48.493 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:48.493 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:48.493 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2143527 00:29:48.493 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2143527 ']' 00:29:48.493 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2143527 00:29:48.493 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:48.493 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:48.493 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2143527 00:29:48.493 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:48.493 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:29:48.493 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2143527' 00:29:48.493 killing process with pid 2143527 00:29:48.493 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2143527 00:29:48.493 Received shutdown signal, test time was about 2.000000 seconds 00:29:48.493 00:29:48.493 Latency(us) 00:29:48.493 [2024-11-28T07:28:45.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.493 [2024-11-28T07:28:45.782Z] =================================================================================================================== 00:29:48.493 [2024-11-28T07:28:45.782Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:48.493 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2143527 00:29:48.754 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:48.754 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:48.754 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:48.754 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:48.754 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:48.754 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:48.754 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:48.754 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2144310 00:29:48.754 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2144310 /var/tmp/bperf.sock 00:29:48.754 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2144310 ']' 00:29:48.754 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:48.754 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:48.754 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.754 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:48.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:48.754 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.754 08:28:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:48.754 [2024-11-28 08:28:45.891379] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:29:48.754 [2024-11-28 08:28:45.891437] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2144310 ] 00:29:48.754 [2024-11-28 08:28:45.972748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.754 [2024-11-28 08:28:46.002222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.695 08:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:49.695 08:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:49.695 08:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:49.695 08:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:49.695 08:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:49.695 08:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:49.695 08:28:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:50.266 nvme0n1 00:29:50.266 08:28:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:50.266 08:28:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:50.266 Running I/O for 2 seconds... 
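Each digest_clean phase varies only the workload, IO size and queue depth, and run_bperf maps those straight onto bdevperf flags; schematically, for this randwrite/4096/qd128 phase (command exactly as traced above):

    # run_bperf <rw> <bs> <qd>  ->  bdevperf -w <rw> -o <bs> -q <qd>
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc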
00:29:52.150 30187.00 IOPS, 117.92 MiB/s [2024-11-28T07:28:49.439Z] 30294.00 IOPS, 118.34 MiB/s 00:29:52.150 Latency(us) 00:29:52.150 [2024-11-28T07:28:49.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:52.150 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:52.150 nvme0n1 : 2.00 30317.66 118.43 0.00 0.00 4216.61 2157.23 11414.19 00:29:52.150 [2024-11-28T07:28:49.439Z] =================================================================================================================== 00:29:52.150 [2024-11-28T07:28:49.439Z] Total : 30317.66 118.43 0.00 0.00 4216.61 2157.23 11414.19 00:29:52.150 { 00:29:52.150 "results": [ 00:29:52.150 { 00:29:52.150 "job": "nvme0n1", 00:29:52.150 "core_mask": "0x2", 00:29:52.150 "workload": "randwrite", 00:29:52.150 "status": "finished", 00:29:52.150 "queue_depth": 128, 00:29:52.150 "io_size": 4096, 00:29:52.150 "runtime": 2.004706, 00:29:52.150 "iops": 30317.66254004328, 00:29:52.150 "mibps": 118.42836929704406, 00:29:52.150 "io_failed": 0, 00:29:52.150 "io_timeout": 0, 00:29:52.150 "avg_latency_us": 4216.60571741968, 00:29:52.150 "min_latency_us": 2157.2266666666665, 00:29:52.150 "max_latency_us": 11414.186666666666 00:29:52.150 } 00:29:52.150 ], 00:29:52.150 "core_count": 1 00:29:52.150 } 00:29:52.150 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:52.150 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:52.150 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:52.150 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:52.150 | select(.opcode=="crc32c") 00:29:52.150 | "\(.module_name) \(.executed)"' 00:29:52.150 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:52.411 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:52.411 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:52.411 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:52.411 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:52.411 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2144310 00:29:52.411 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2144310 ']' 00:29:52.411 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2144310 00:29:52.411 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:52.411 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:52.411 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2144310 00:29:52.411 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:52.411 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:29:52.411 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2144310' 00:29:52.411 killing process with pid 2144310 00:29:52.411 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2144310 00:29:52.411 Received shutdown signal, test time was about 2.000000 seconds 00:29:52.411 00:29:52.411 Latency(us) 00:29:52.411 [2024-11-28T07:28:49.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:52.411 [2024-11-28T07:28:49.700Z] =================================================================================================================== 00:29:52.411 [2024-11-28T07:28:49.700Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:52.411 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2144310 00:29:52.672 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:52.672 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:52.672 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:52.672 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:52.672 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:52.672 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:52.672 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:52.672 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2145160 00:29:52.672 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2145160 /var/tmp/bperf.sock 00:29:52.672 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2145160 ']' 00:29:52.672 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:52.672 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:52.672 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:52.672 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:52.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:52.672 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:52.672 08:28:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:52.672 [2024-11-28 08:28:49.786281] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:29:52.672 [2024-11-28 08:28:49.786338] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2145160 ] 00:29:52.672 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:52.672 Zero copy mechanism will not be used. 00:29:52.672 [2024-11-28 08:28:49.871207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.672 [2024-11-28 08:28:49.900684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.613 08:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:53.613 08:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:53.613 08:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:53.613 08:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:53.614 08:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:53.614 08:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:53.614 08:28:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:53.875 nvme0n1 00:29:53.875 08:28:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:53.875 08:28:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:53.875 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:53.875 Zero copy mechanism will not be used. 00:29:53.875 Running I/O for 2 seconds... 
00:29:55.928 5897.00 IOPS, 737.12 MiB/s [2024-11-28T07:28:53.217Z] 5007.50 IOPS, 625.94 MiB/s 00:29:55.928 Latency(us) 00:29:55.928 [2024-11-28T07:28:53.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:55.928 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:55.928 nvme0n1 : 2.00 5006.04 625.75 0.00 0.00 3191.48 1249.28 7208.96 00:29:55.928 [2024-11-28T07:28:53.217Z] =================================================================================================================== 00:29:55.928 [2024-11-28T07:28:53.217Z] Total : 5006.04 625.75 0.00 0.00 3191.48 1249.28 7208.96 00:29:55.928 { 00:29:55.928 "results": [ 00:29:55.928 { 00:29:55.928 "job": "nvme0n1", 00:29:55.928 "core_mask": "0x2", 00:29:55.928 "workload": "randwrite", 00:29:55.928 "status": "finished", 00:29:55.928 "queue_depth": 16, 00:29:55.928 "io_size": 131072, 00:29:55.928 "runtime": 2.004579, 00:29:55.928 "iops": 5006.038674454836, 00:29:55.928 "mibps": 625.7548343068545, 00:29:55.928 "io_failed": 0, 00:29:55.928 "io_timeout": 0, 00:29:55.928 "avg_latency_us": 3191.476530808836, 00:29:55.928 "min_latency_us": 1249.28, 00:29:55.928 "max_latency_us": 7208.96 00:29:55.928 } 00:29:55.928 ], 00:29:55.928 "core_count": 1 00:29:55.928 } 00:29:55.928 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:55.928 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:55.928 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:55.928 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:55.928 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:55.928 | select(.opcode=="crc32c") 00:29:55.928 | "\(.module_name) \(.executed)"' 00:29:56.217 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:56.217 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:56.217 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:56.217 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:56.217 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2145160 00:29:56.217 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2145160 ']' 00:29:56.217 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2145160 00:29:56.217 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:56.217 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:56.217 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2145160 00:29:56.217 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:56.217 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:29:56.217 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2145160' 00:29:56.217 killing process with pid 2145160 00:29:56.217 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2145160 00:29:56.217 Received shutdown signal, test time was about 2.000000 seconds 00:29:56.217 00:29:56.217 Latency(us) 00:29:56.217 [2024-11-28T07:28:53.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:56.217 [2024-11-28T07:28:53.506Z] =================================================================================================================== 00:29:56.218 [2024-11-28T07:28:53.507Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:56.218 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2145160 00:29:56.480 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2142579 00:29:56.480 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2142579 ']' 00:29:56.480 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2142579 00:29:56.480 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:56.480 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:56.480 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2142579 00:29:56.480 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:56.480 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:56.480 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2142579' 00:29:56.480 killing process with pid 2142579 00:29:56.480 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2142579 00:29:56.480 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2142579 00:29:56.480 00:29:56.480 real 0m16.845s 00:29:56.480 user 0m33.369s 00:29:56.480 sys 0m3.714s 00:29:56.480 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:56.480 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:56.480 ************************************ 00:29:56.480 END TEST nvmf_digest_clean 00:29:56.480 ************************************ 00:29:56.480 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:56.480 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:56.480 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:56.480 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:56.741 ************************************ 00:29:56.741 START TEST nvmf_digest_error 00:29:56.741 ************************************ 00:29:56.741 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 
00:29:56.741 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:56.741 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:56.741 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:56.741 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:56.741 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2145934 00:29:56.741 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2145934 00:29:56.741 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:56.741 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2145934 ']' 00:29:56.741 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.741 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:56.741 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:56.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:56.741 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:56.741 08:28:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:56.741 [2024-11-28 08:28:53.859196] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:29:56.741 [2024-11-28 08:28:53.859253] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:56.741 [2024-11-28 08:28:53.952311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.741 [2024-11-28 08:28:53.985863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:56.741 [2024-11-28 08:28:53.985908] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:56.741 [2024-11-28 08:28:53.985914] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:56.741 [2024-11-28 08:28:53.985919] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:56.741 [2024-11-28 08:28:53.985924] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:56.741 [2024-11-28 08:28:53.986414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:57.683 [2024-11-28 08:28:54.692365] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:57.683 null0 00:29:57.683 [2024-11-28 08:28:54.771190] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:57.683 [2024-11-28 08:28:54.795400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2146135 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2146135 /var/tmp/bperf.sock 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2146135 ']' 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:57.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:57.683 08:28:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:57.683 [2024-11-28 08:28:54.850693] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:29:57.683 [2024-11-28 08:28:54.850740] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146135 ] 00:29:57.683 [2024-11-28 08:28:54.933498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.683 [2024-11-28 08:28:54.963488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.625 08:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:58.625 08:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:58.625 08:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:58.625 08:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:58.625 08:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:58.625 08:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.625 08:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:58.625 08:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.625 08:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:58.625 08:28:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:58.885 nvme0n1 00:29:58.885 08:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:58.885 08:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.885 08:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
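The error-path test differs from the clean path only in the accel setup: crc32c is routed to the error module on the target before the controller is attached with data digest enabled, and corruption is then injected. A compressed sketch of the RPC sequence, with commands and flags taken verbatim from the trace above (rpc_cmd targets the nvmf app, bperf_rpc targets /var/tmp/bperf.sock):

    rpc_cmd accel_assign_opc -o crc32c -m error        # target: route crc32c through the error module
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256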
00:29:58.885 08:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.885 08:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:58.885 08:28:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:59.146 Running I/O for 2 seconds... 00:29:59.146 [2024-11-28 08:28:56.199519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:29:59.146 [2024-11-28 08:28:56.199557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.146 [2024-11-28 08:28:56.199566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.146 [2024-11-28 08:28:56.211245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:29:59.146 [2024-11-28 08:28:56.211266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.146 [2024-11-28 08:28:56.211274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.146 [2024-11-28 08:28:56.220100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:29:59.146 [2024-11-28 08:28:56.220120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.146 [2024-11-28 08:28:56.220127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.146 [2024-11-28 08:28:56.228951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:29:59.146 [2024-11-28 08:28:56.228970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.146 [2024-11-28 08:28:56.228978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.146 [2024-11-28 08:28:56.238123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:29:59.146 [2024-11-28 08:28:56.238141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.146 [2024-11-28 08:28:56.238148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.146 [2024-11-28 08:28:56.247307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:29:59.146 [2024-11-28 08:28:56.247325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.146 [2024-11-28 08:28:56.247332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.146 [2024-11-28 08:28:56.255366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:29:59.146 [2024-11-28 08:28:56.255385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.146 [2024-11-28 08:28:56.255391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.146 [2024-11-28 08:28:56.264237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:29:59.146 [2024-11-28 08:28:56.264256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.147 [2024-11-28 08:28:56.264263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.147 [2024-11-28 08:28:56.274006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:29:59.147 [2024-11-28 08:28:56.274025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.147 [2024-11-28 08:28:56.274032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.147 [2024-11-28 08:28:56.282850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:29:59.147 [2024-11-28 08:28:56.282868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.147 [2024-11-28 08:28:56.282875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.147 [2024-11-28 08:28:56.292148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:29:59.147 [2024-11-28 08:28:56.292171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.147 [2024-11-28 08:28:56.292177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.147 [2024-11-28 08:28:56.301297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:29:59.147 [2024-11-28 08:28:56.301316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.147 [2024-11-28 08:28:56.301322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.147 [2024-11-28 08:28:56.310888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:29:59.147 [2024-11-28 08:28:56.310906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.147 [2024-11-28 08:28:56.310913] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.147 [2024-11-28 08:28:56.319521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:29:59.147 [2024-11-28 08:28:56.319539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.147 [2024-11-28 08:28:56.319545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.147 [2024-11-28 08:28:56.328823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:29:59.147 [2024-11-28 08:28:56.328840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.147 [2024-11-28 08:28:56.328847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.147 [2024-11-28 08:28:56.337491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:29:59.147 [2024-11-28 08:28:56.337509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.147 [2024-11-28 08:28:56.337516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.147 [2024-11-28 08:28:56.347146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:29:59.147 [2024-11-28 08:28:56.347169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.147 [2024-11-28 08:28:56.347176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.147 [2024-11-28 08:28:56.356112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:29:59.147 [2024-11-28 08:28:56.356130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.147 [2024-11-28 08:28:56.356140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.147 [2024-11-28 08:28:56.366506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:29:59.147 [2024-11-28 08:28:56.366524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.147 [2024-11-28 08:28:56.366531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.147 [2024-11-28 08:28:56.376209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:29:59.147 [2024-11-28 08:28:56.376226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:59.147 [2024-11-28 
08:28:56.376233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:59.147 [2024-11-28 08:28:56.384984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190)
00:29:59.147 [2024-11-28 08:28:56.385002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:59.147 [2024-11-28 08:28:56.385009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-record pattern repeats for every outstanding READ on tqpair=(0x1a26190) from 08:28:56.394675 through 08:28:57.180411: nvme_tcp.c:1365 "data digest error", nvme_qpair.c:243 READ command print (qid:1, varying cid and lba, len:1), nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with dnr:0 ...]
00:29:59.936 27290.00 IOPS, 106.60 MiB/s [2024-11-28T07:28:57.225Z]
[... pattern continues unchanged from 08:28:57.189675 through 08:28:57.700369, elapsed stamps advancing from 00:29:59.936 to 00:30:00.461 ...]
00:30:00.461 [2024-11-28 08:28:57.710036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error
on tqpair=(0x1a26190) 00:30:00.461 [2024-11-28 08:28:57.710054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.461 [2024-11-28 08:28:57.710060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.461 [2024-11-28 08:28:57.719742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.461 [2024-11-28 08:28:57.719759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.461 [2024-11-28 08:28:57.719765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.461 [2024-11-28 08:28:57.727473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.461 [2024-11-28 08:28:57.727490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.461 [2024-11-28 08:28:57.727496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.461 [2024-11-28 08:28:57.736732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.461 [2024-11-28 08:28:57.736750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.461 [2024-11-28 08:28:57.736757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.461 [2024-11-28 08:28:57.745803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.461 [2024-11-28 08:28:57.745820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.461 [2024-11-28 08:28:57.745826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.722 [2024-11-28 08:28:57.755109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.722 [2024-11-28 08:28:57.755126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.722 [2024-11-28 08:28:57.755133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.722 [2024-11-28 08:28:57.764075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.722 [2024-11-28 08:28:57.764093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.722 [2024-11-28 08:28:57.764099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.722 [2024-11-28 08:28:57.776717] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:57.776735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:57.776741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.723 [2024-11-28 08:28:57.788118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:57.788137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:57.788147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.723 [2024-11-28 08:28:57.798307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:57.798325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:57.798332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.723 [2024-11-28 08:28:57.807080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:57.807098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:57.807105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.723 [2024-11-28 08:28:57.816236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:57.816254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:57.816260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.723 [2024-11-28 08:28:57.827174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:57.827191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:57.827198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.723 [2024-11-28 08:28:57.835216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:57.835233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:57.835239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:30:00.723 [2024-11-28 08:28:57.844816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:57.844833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:57.844839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.723 [2024-11-28 08:28:57.852902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:57.852919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:57.852926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.723 [2024-11-28 08:28:57.861997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:57.862015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:57.862021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.723 [2024-11-28 08:28:57.873176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:57.873193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:57.873200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.723 [2024-11-28 08:28:57.882293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:57.882311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:57.882317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.723 [2024-11-28 08:28:57.891744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:57.891762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:57.891768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.723 [2024-11-28 08:28:57.900176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:57.900193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:57.900200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.723 [2024-11-28 08:28:57.908833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:57.908849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:57.908856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.723 [2024-11-28 08:28:57.918808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:57.918825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:57.918832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.723 [2024-11-28 08:28:57.926783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:57.926799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:57.926806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.723 [2024-11-28 08:28:57.936306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:57.936324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:57.936331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.723 [2024-11-28 08:28:57.945109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:57.945126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:57.945135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.723 [2024-11-28 08:28:57.954011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:57.954029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:57.954035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.723 [2024-11-28 08:28:57.963631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:57.963649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:57.963656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.723 [2024-11-28 08:28:57.972437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:57.972455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:57.972462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.723 [2024-11-28 08:28:57.982313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:57.982331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:57.982338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.723 [2024-11-28 08:28:57.992991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:57.993009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:57.993015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.723 [2024-11-28 08:28:58.001124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.723 [2024-11-28 08:28:58.001143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.723 [2024-11-28 08:28:58.001151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.985 [2024-11-28 08:28:58.010659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.985 [2024-11-28 08:28:58.010678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.985 [2024-11-28 08:28:58.010684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.985 [2024-11-28 08:28:58.020618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.985 [2024-11-28 08:28:58.020636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.985 [2024-11-28 08:28:58.020643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.985 [2024-11-28 08:28:58.029056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.985 [2024-11-28 08:28:58.029077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:00.985 [2024-11-28 08:28:58.029084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.985 [2024-11-28 08:28:58.038735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.985 [2024-11-28 08:28:58.038752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.985 [2024-11-28 08:28:58.038758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.985 [2024-11-28 08:28:58.048294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.985 [2024-11-28 08:28:58.048312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.985 [2024-11-28 08:28:58.048319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.985 [2024-11-28 08:28:58.059289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.985 [2024-11-28 08:28:58.059307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.985 [2024-11-28 08:28:58.059314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.985 [2024-11-28 08:28:58.067603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.985 [2024-11-28 08:28:58.067622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.985 [2024-11-28 08:28:58.067629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.985 [2024-11-28 08:28:58.077001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.985 [2024-11-28 08:28:58.077020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.985 [2024-11-28 08:28:58.077026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.985 [2024-11-28 08:28:58.085662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.985 [2024-11-28 08:28:58.085681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.985 [2024-11-28 08:28:58.085688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.985 [2024-11-28 08:28:58.094347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.985 [2024-11-28 08:28:58.094365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:7516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.985 [2024-11-28 08:28:58.094372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.985 [2024-11-28 08:28:58.104290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.985 [2024-11-28 08:28:58.104308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.985 [2024-11-28 08:28:58.104315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.985 [2024-11-28 08:28:58.112543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.985 [2024-11-28 08:28:58.112562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.985 [2024-11-28 08:28:58.112568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.985 [2024-11-28 08:28:58.122681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.985 [2024-11-28 08:28:58.122699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.986 [2024-11-28 08:28:58.122706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.986 [2024-11-28 08:28:58.131012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.986 [2024-11-28 08:28:58.131030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.986 [2024-11-28 08:28:58.131037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.986 [2024-11-28 08:28:58.140653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.986 [2024-11-28 08:28:58.140670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.986 [2024-11-28 08:28:58.140677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.986 [2024-11-28 08:28:58.150235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.986 [2024-11-28 08:28:58.150253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.986 [2024-11-28 08:28:58.150260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:00.986 [2024-11-28 08:28:58.158354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190) 00:30:00.986 [2024-11-28 08:28:58.158372] nvme_qpair.c: 
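Each failed READ above leaves that fixed three-record signature, which makes a saved run easy to audit after the fact. A minimal post-mortem sketch, assuming this console output was captured to a hypothetical build.log; neither the file name nor this counting step is part of the harness:

    #!/usr/bin/env bash
    # Tally injected digest errors in a saved console log (post-mortem
    # helper sketch, not part of host/digest.sh).
    log=build.log

    # One "data digest error" line per failed READ plus one transient
    # transport error completion; the two counts should agree.
    grep -c 'data digest error on tqpair' "$log"
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$log"

    # Per-qpair breakdown, keyed on the tqpair pointer in each error line.
    grep -o 'data digest error on tqpair=(0x[0-9a-f]*)' "$log" | sort | uniq -c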
00:30:00.986 27489.00 IOPS, 107.38 MiB/s [2024-11-28T07:28:58.275Z]
00:30:00.986 [2024-11-28 08:28:58.187684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a26190)
00:30:00.986 [2024-11-28 08:28:58.187705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:00.986 [2024-11-28 08:28:58.187712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:30:00.986
00:30:00.986 Latency(us)
00:30:00.986 [2024-11-28T07:28:58.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:00.986 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:30:00.986 nvme0n1 : 2.01 27484.97 107.36 0.00 0.00 4650.63 2020.69 17803.95
00:30:00.986 [2024-11-28T07:28:58.275Z] ===================================================================================================================
00:30:00.986 [2024-11-28T07:28:58.275Z] Total : 27484.97 107.36 0.00 0.00 4650.63 2020.69 17803.95
00:30:00.986 {
00:30:00.986   "results": [
00:30:00.986     {
00:30:00.986       "job": "nvme0n1",
00:30:00.986       "core_mask": "0x2",
00:30:00.986       "workload": "randread",
00:30:00.986       "status": "finished",
00:30:00.986       "queue_depth": 128,
00:30:00.986       "io_size": 4096,
00:30:00.986       "runtime": 2.005169,
00:30:00.986       "iops": 27484.965107679203,
00:30:00.986       "mibps": 107.36314495187189,
00:30:00.986       "io_failed": 0,
00:30:00.986       "io_timeout": 0,
00:30:00.986       "avg_latency_us": 4650.634832341414,
00:30:00.986       "min_latency_us": 2020.6933333333334,
00:30:00.986       "max_latency_us": 17803.946666666667
00:30:00.986     }
00:30:00.986   ],
00:30:00.986   "core_count": 1
00:30:00.986 }
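After the run, the harness reads the per-bdev NVMe error counters back over the bdevperf RPC socket (traced below as get_transient_errcount). A standalone equivalent, assuming the bdevperf instance is still listening on /var/tmp/bperf.sock and was started with --nvme-error-stat in effect:

    # Pull the transient-transport-error counter for nvme0n1 from the
    # running bdevperf instance; mirrors get_transient_errcount in
    # host/digest.sh.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error'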
00:30:00.986 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:00.986 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:00.986 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:00.986 | .driver_specific
00:30:00.986 | .nvme_error
00:30:00.986 | .status_code
00:30:00.986 | .command_transient_transport_error'
00:30:00.986 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:01.247 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 216 > 0 ))
00:30:01.247 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2146135
00:30:01.247 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2146135 ']'
00:30:01.247 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2146135
00:30:01.247 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:30:01.247 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:01.247 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2146135
00:30:01.247 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:01.247 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:01.247 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2146135'
00:30:01.247 killing process with pid 2146135
00:30:01.247 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2146135
00:30:01.247 Received shutdown signal, test time was about 2.000000 seconds
00:30:01.247
00:30:01.247 Latency(us)
00:30:01.247 [2024-11-28T07:28:58.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:01.247 [2024-11-28T07:28:58.536Z] ===================================================================================================================
00:30:01.247 [2024-11-28T07:28:58.536Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:01.247 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2146135
00:30:01.508 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:30:01.508 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:01.508 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:30:01.508 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:30:01.508 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:30:01.508 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2146906
00:30:01.508 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2146906 /var/tmp/bperf.sock
00:30:01.508 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2146906 ']'
00:30:01.508 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
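The relaunch traced above reduces to one bdevperf invocation plus a wait for its RPC socket. Roughly, outside the harness, with waitforlisten approximated here by polling rpc_get_methods (an assumption, not the harness's exact implementation):

    # Start an idle bdevperf for the 131072-byte (128 KiB) randread error
    # run. Flags as traced: core mask 0x2, private RPC socket, 2 s runtime,
    # queue depth 16; -z makes it idle until an RPC-driven perform_tests.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Block until the RPC socket answers (stand-in for waitforlisten).
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock -t 30 rpc_get_methods > /dev/null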
08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:01.508 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:01.508 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:01.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:01.508 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:01.508 08:28:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:01.508 [2024-11-28 08:28:58.616470] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization...
00:30:01.508 [2024-11-28 08:28:58.616544] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146906 ]
00:30:01.508 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:01.508 Zero copy mechanism will not be used.
00:30:01.508 [2024-11-28 08:28:58.698694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:01.508 [2024-11-28 08:28:58.728231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:02.448 08:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:02.448 08:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:30:02.448 08:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:02.448 08:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:02.448 08:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:02.448 08:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:02.448 08:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:02.448 08:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:02.448 08:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:02.448 08:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:02.709 nvme0n1
00:30:02.709 08:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:30:02.709 08:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
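The RPCs just traced set the run up end to end: error accounting on, bdev retries off, any stale CRC32C injection cleared, the controller attached with data digest enabled, and the corruption armed. A standalone sketch, with bperf_rpc resolved to /var/tmp/bperf.sock as traced, and rpc_cmd assumed here to target the default application socket /var/tmp/spdk.sock (which socket it actually hits depends on harness wiring):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Count NVMe errors per status code and never retry failed I/O, so
    # every digest failure surfaces as a transient transport error.
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock \
        bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear leftover CRC32C error injection, attach over TCP with the
    # data digest enabled (--ddgst), then arm corruption (-i 32 as traced).
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock \
        accel_error_inject_error -o crc32c -t disable
    "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock \
        accel_error_inject_error -o crc32c -t corrupt -i 32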
00:30:02.709 08:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:02.709 08:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:02.709 08:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:02.709 08:28:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:02.971 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:02.971 Zero copy mechanism will not be used.
00:30:02.971 Running I/O for 2 seconds...
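With the corruption armed, the run itself is a single RPC into the idling bdevperf; the records that follow are the injected failures it provokes. A sketch of this driver step, under the same socket assumptions as above (on completion bdevperf prints a summary table and JSON results in the same shape as the block after the first run):

    # Drive the armed workload; bdevperf idles under -z until this call.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests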
00:30:02.971 [2024-11-28 08:29:00.008105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570)
00:30:02.971 [2024-11-28 08:29:00.008139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:02.971 [2024-11-28 08:29:00.008148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-record pattern repeats for dozens more len:32 READs on tqpair 0x7c7570, timestamps 08:29:00.018167 through 08:29:00.272174; individual records omitted ...]
00:30:03.235 [2024-11-28 08:29:00.278082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570)
00:30:03.235 [2024-11-28 08:29:00.278101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:03.235 [2024-11-28 08:29:00.278108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:30:03.235 [2024-11-28 08:29:00.288862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570)
00:30:03.235 [2024-11-28 08:29:00.288882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:03.235 [2024-11-28 08:29:00.288888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:03.235 [2024-11-28 08:29:00.300937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.235 [2024-11-28 08:29:00.300957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.235 [2024-11-28 08:29:00.300963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.235 [2024-11-28 08:29:00.312981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.235 [2024-11-28 08:29:00.313000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.235 [2024-11-28 08:29:00.313013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.235 [2024-11-28 08:29:00.325091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.235 [2024-11-28 08:29:00.325111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.235 [2024-11-28 08:29:00.325117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.235 [2024-11-28 08:29:00.337245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.235 [2024-11-28 08:29:00.337263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.235 [2024-11-28 08:29:00.337270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.235 [2024-11-28 08:29:00.349737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.235 [2024-11-28 08:29:00.349756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.235 [2024-11-28 08:29:00.349763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.235 [2024-11-28 08:29:00.361958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.235 [2024-11-28 08:29:00.361978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.235 [2024-11-28 08:29:00.361985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.235 [2024-11-28 08:29:00.374597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.235 [2024-11-28 08:29:00.374616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.235 [2024-11-28 08:29:00.374622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.235 [2024-11-28 08:29:00.386606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.235 [2024-11-28 08:29:00.386626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.235 [2024-11-28 08:29:00.386632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.235 [2024-11-28 08:29:00.399569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.235 [2024-11-28 08:29:00.399588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.235 [2024-11-28 08:29:00.399595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.235 [2024-11-28 08:29:00.411781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.235 [2024-11-28 08:29:00.411801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.235 [2024-11-28 08:29:00.411808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.236 [2024-11-28 08:29:00.423801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.236 [2024-11-28 08:29:00.423823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.236 [2024-11-28 08:29:00.423830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.236 [2024-11-28 08:29:00.435843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.236 [2024-11-28 08:29:00.435863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.236 [2024-11-28 08:29:00.435869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.236 [2024-11-28 08:29:00.443351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.236 [2024-11-28 08:29:00.443370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.236 [2024-11-28 08:29:00.443377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.236 [2024-11-28 08:29:00.451596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.236 [2024-11-28 08:29:00.451616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.236 [2024-11-28 08:29:00.451622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.236 [2024-11-28 08:29:00.459230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.236 [2024-11-28 08:29:00.459249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.236 [2024-11-28 08:29:00.459256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.236 [2024-11-28 08:29:00.466183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.236 [2024-11-28 08:29:00.466201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.236 [2024-11-28 08:29:00.466207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.236 [2024-11-28 08:29:00.470902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.236 [2024-11-28 08:29:00.470920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.236 [2024-11-28 08:29:00.470926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.236 [2024-11-28 08:29:00.475752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.236 [2024-11-28 08:29:00.475770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.236 [2024-11-28 08:29:00.475777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.236 [2024-11-28 08:29:00.484851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.236 [2024-11-28 08:29:00.484869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.236 [2024-11-28 08:29:00.484878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.236 [2024-11-28 08:29:00.495109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.236 [2024-11-28 08:29:00.495128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.236 [2024-11-28 08:29:00.495134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.236 [2024-11-28 08:29:00.502986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.236 [2024-11-28 08:29:00.503004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.236 [2024-11-28 08:29:00.503010] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.236 [2024-11-28 08:29:00.510072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.236 [2024-11-28 08:29:00.510090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.236 [2024-11-28 08:29:00.510096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.236 [2024-11-28 08:29:00.520276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.236 [2024-11-28 08:29:00.520294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.236 [2024-11-28 08:29:00.520300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.499 [2024-11-28 08:29:00.529688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.529707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.529714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.536838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.536855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.536862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.545102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.545120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.545126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.554761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.554780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.554787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.562118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.562139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 
[2024-11-28 08:29:00.562146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.568745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.568762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.568769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.574307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.574325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.574332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.580942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.580959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.580965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.589887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.589905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.589912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.598220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.598237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.598244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.605922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.605940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.605947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.615095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.615114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.615120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.624234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.624253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.624260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.633684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.633703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.633710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.636920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.636939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.636945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.644260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.644279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.644286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.654317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.654336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.654343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.663332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.663351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.663358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.671048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.671067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.671074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.678278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.678296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.678303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.683989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.684007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.684014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.695770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.695789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.695798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.707586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.707605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.707611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.719275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.719294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.719300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.730804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.730823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.730830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.742758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.742776] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.500 [2024-11-28 08:29:00.742783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.500 [2024-11-28 08:29:00.752670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.500 [2024-11-28 08:29:00.752690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.501 [2024-11-28 08:29:00.752697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.501 [2024-11-28 08:29:00.765382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.501 [2024-11-28 08:29:00.765401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.501 [2024-11-28 08:29:00.765408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.501 [2024-11-28 08:29:00.777702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.501 [2024-11-28 08:29:00.777721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.501 [2024-11-28 08:29:00.777727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.762 [2024-11-28 08:29:00.789974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.762 [2024-11-28 08:29:00.789993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-11-28 08:29:00.790000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.762 [2024-11-28 08:29:00.800596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.762 [2024-11-28 08:29:00.800620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-11-28 08:29:00.800627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.762 [2024-11-28 08:29:00.808224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.762 [2024-11-28 08:29:00.808242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-11-28 08:29:00.808248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.762 [2024-11-28 08:29:00.814391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.762 [2024-11-28 08:29:00.814410] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.762 [2024-11-28 08:29:00.814416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.763 [2024-11-28 08:29:00.825400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.763 [2024-11-28 08:29:00.825419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.763 [2024-11-28 08:29:00.825425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.763 [2024-11-28 08:29:00.836084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.763 [2024-11-28 08:29:00.836103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.763 [2024-11-28 08:29:00.836110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.763 [2024-11-28 08:29:00.847373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.763 [2024-11-28 08:29:00.847391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.763 [2024-11-28 08:29:00.847397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.763 [2024-11-28 08:29:00.857885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.763 [2024-11-28 08:29:00.857903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.763 [2024-11-28 08:29:00.857911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.763 [2024-11-28 08:29:00.868843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.763 [2024-11-28 08:29:00.868862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.763 [2024-11-28 08:29:00.868869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.763 [2024-11-28 08:29:00.879187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.763 [2024-11-28 08:29:00.879206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.763 [2024-11-28 08:29:00.879213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.763 [2024-11-28 08:29:00.889877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 
00:30:03.763 [2024-11-28 08:29:00.889895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.763 [2024-11-28 08:29:00.889902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.763 [2024-11-28 08:29:00.900851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.763 [2024-11-28 08:29:00.900870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.763 [2024-11-28 08:29:00.900876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.763 [2024-11-28 08:29:00.912505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.763 [2024-11-28 08:29:00.912523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.763 [2024-11-28 08:29:00.912530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.763 [2024-11-28 08:29:00.921361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.763 [2024-11-28 08:29:00.921380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.763 [2024-11-28 08:29:00.921387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:03.763 [2024-11-28 08:29:00.932573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.763 [2024-11-28 08:29:00.932592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.763 [2024-11-28 08:29:00.932599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:03.763 [2024-11-28 08:29:00.944649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.763 [2024-11-28 08:29:00.944667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.763 [2024-11-28 08:29:00.944674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:03.763 [2024-11-28 08:29:00.954529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:03.763 [2024-11-28 08:29:00.954548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:03.763 [2024-11-28 08:29:00.954554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:03.763 [2024-11-28 08:29:00.965706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
00:30:03.763 3489.00 IOPS, 436.12 MiB/s [2024-11-28T07:29:01.052Z]
00:30:03.763 [2024-11-28 08:29:01.006973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570)
00:30:03.763 [2024-11-28 08:29:01.006991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:03.763 [2024-11-28 08:29:01.006998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... identical pattern repeated for timestamps 08:29:01.016 through 08:29:01.478 (Jenkins time 00:30:03.763-00:30:04.287) ...]
00:30:04.287 [2024-11-28 08:29:01.489692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570)
00:30:04.287 [2024-11-28 08:29:01.489711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:10 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.287 [2024-11-28 08:29:01.489717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.287 [2024-11-28 08:29:01.498320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.287 [2024-11-28 08:29:01.498338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.287 [2024-11-28 08:29:01.498345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.287 [2024-11-28 08:29:01.509659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.287 [2024-11-28 08:29:01.509678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.287 [2024-11-28 08:29:01.509685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.287 [2024-11-28 08:29:01.521900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.287 [2024-11-28 08:29:01.521919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.287 [2024-11-28 08:29:01.521926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.287 [2024-11-28 08:29:01.530920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.287 [2024-11-28 08:29:01.530939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.287 [2024-11-28 08:29:01.530948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.287 [2024-11-28 08:29:01.542714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.287 [2024-11-28 08:29:01.542736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.287 [2024-11-28 08:29:01.542742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.287 [2024-11-28 08:29:01.554457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.287 [2024-11-28 08:29:01.554477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.287 [2024-11-28 08:29:01.554483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.287 [2024-11-28 08:29:01.565806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.287 [2024-11-28 08:29:01.565825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.287 [2024-11-28 08:29:01.565832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.548 [2024-11-28 08:29:01.576806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.548 [2024-11-28 08:29:01.576825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.548 [2024-11-28 08:29:01.576831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.548 [2024-11-28 08:29:01.587959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.548 [2024-11-28 08:29:01.587978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.548 [2024-11-28 08:29:01.587984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.548 [2024-11-28 08:29:01.599977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.548 [2024-11-28 08:29:01.599996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.548 [2024-11-28 08:29:01.600002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.548 [2024-11-28 08:29:01.612063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.548 [2024-11-28 08:29:01.612082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.548 [2024-11-28 08:29:01.612089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.548 [2024-11-28 08:29:01.622910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.548 [2024-11-28 08:29:01.622928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.548 [2024-11-28 08:29:01.622935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.548 [2024-11-28 08:29:01.633372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.549 [2024-11-28 08:29:01.633391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.549 [2024-11-28 08:29:01.633398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.549 [2024-11-28 08:29:01.643388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.549 
[2024-11-28 08:29:01.643408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.549 [2024-11-28 08:29:01.643414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.549 [2024-11-28 08:29:01.654074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.549 [2024-11-28 08:29:01.654094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.549 [2024-11-28 08:29:01.654100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.549 [2024-11-28 08:29:01.665267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.549 [2024-11-28 08:29:01.665287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.549 [2024-11-28 08:29:01.665293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.549 [2024-11-28 08:29:01.676616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.549 [2024-11-28 08:29:01.676636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.549 [2024-11-28 08:29:01.676642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.549 [2024-11-28 08:29:01.688202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.549 [2024-11-28 08:29:01.688222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.549 [2024-11-28 08:29:01.688228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.549 [2024-11-28 08:29:01.699535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.549 [2024-11-28 08:29:01.699555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.549 [2024-11-28 08:29:01.699562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.549 [2024-11-28 08:29:01.706595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.549 [2024-11-28 08:29:01.706614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.549 [2024-11-28 08:29:01.706621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.549 [2024-11-28 08:29:01.716183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x7c7570) 00:30:04.549 [2024-11-28 08:29:01.716202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.549 [2024-11-28 08:29:01.716209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.549 [2024-11-28 08:29:01.727715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.549 [2024-11-28 08:29:01.727735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.549 [2024-11-28 08:29:01.727745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.549 [2024-11-28 08:29:01.737330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.549 [2024-11-28 08:29:01.737349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.549 [2024-11-28 08:29:01.737355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.549 [2024-11-28 08:29:01.747802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.549 [2024-11-28 08:29:01.747822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.549 [2024-11-28 08:29:01.747829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.549 [2024-11-28 08:29:01.759886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.549 [2024-11-28 08:29:01.759905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.549 [2024-11-28 08:29:01.759912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.549 [2024-11-28 08:29:01.770881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.549 [2024-11-28 08:29:01.770900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.549 [2024-11-28 08:29:01.770907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.549 [2024-11-28 08:29:01.783155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.549 [2024-11-28 08:29:01.783180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.549 [2024-11-28 08:29:01.783187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.549 [2024-11-28 08:29:01.795298] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.549 [2024-11-28 08:29:01.795318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.549 [2024-11-28 08:29:01.795325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.549 [2024-11-28 08:29:01.804477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.549 [2024-11-28 08:29:01.804496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.549 [2024-11-28 08:29:01.804502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.549 [2024-11-28 08:29:01.815465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.549 [2024-11-28 08:29:01.815485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.549 [2024-11-28 08:29:01.815491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.549 [2024-11-28 08:29:01.826426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.549 [2024-11-28 08:29:01.826448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.549 [2024-11-28 08:29:01.826455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.810 [2024-11-28 08:29:01.837901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.810 [2024-11-28 08:29:01.837921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.810 [2024-11-28 08:29:01.837927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.810 [2024-11-28 08:29:01.845362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.810 [2024-11-28 08:29:01.845382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.810 [2024-11-28 08:29:01.845390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.810 [2024-11-28 08:29:01.856136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.810 [2024-11-28 08:29:01.856154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.810 [2024-11-28 08:29:01.856166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:30:04.810 [2024-11-28 08:29:01.862969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.810 [2024-11-28 08:29:01.862987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.810 [2024-11-28 08:29:01.862994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.810 [2024-11-28 08:29:01.873928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.810 [2024-11-28 08:29:01.873946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.810 [2024-11-28 08:29:01.873953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.810 [2024-11-28 08:29:01.885483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.810 [2024-11-28 08:29:01.885502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.810 [2024-11-28 08:29:01.885508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.810 [2024-11-28 08:29:01.897641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.810 [2024-11-28 08:29:01.897660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.810 [2024-11-28 08:29:01.897667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.810 [2024-11-28 08:29:01.910125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.810 [2024-11-28 08:29:01.910145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.810 [2024-11-28 08:29:01.910151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.810 [2024-11-28 08:29:01.921150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.810 [2024-11-28 08:29:01.921173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.810 [2024-11-28 08:29:01.921180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.810 [2024-11-28 08:29:01.932273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.810 [2024-11-28 08:29:01.932293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.810 [2024-11-28 08:29:01.932299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.810 [2024-11-28 08:29:01.942846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.810 [2024-11-28 08:29:01.942866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.810 [2024-11-28 08:29:01.942872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.810 [2024-11-28 08:29:01.955114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.810 [2024-11-28 08:29:01.955133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.810 [2024-11-28 08:29:01.955140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.810 [2024-11-28 08:29:01.964654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.811 [2024-11-28 08:29:01.964674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.811 [2024-11-28 08:29:01.964680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.811 [2024-11-28 08:29:01.975626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.811 [2024-11-28 08:29:01.975645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.811 [2024-11-28 08:29:01.975652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:04.811 [2024-11-28 08:29:01.986891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.811 [2024-11-28 08:29:01.986911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.811 [2024-11-28 08:29:01.986918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:04.811 [2024-11-28 08:29:01.998984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.811 [2024-11-28 08:29:01.999003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.811 [2024-11-28 08:29:01.999010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:04.811 3172.50 IOPS, 396.56 MiB/s [2024-11-28T07:29:02.100Z] [2024-11-28 08:29:02.009425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c7570) 00:30:04.811 [2024-11-28 08:29:02.009445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.811 [2024-11-28 08:29:02.009455] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:04.811 00:30:04.811 Latency(us) 00:30:04.811 [2024-11-28T07:29:02.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.811 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:04.811 nvme0n1 : 2.05 3109.21 388.65 0.00 0.00 5046.77 512.00 46967.47 00:30:04.811 [2024-11-28T07:29:02.100Z] =================================================================================================================== 00:30:04.811 [2024-11-28T07:29:02.100Z] Total : 3109.21 388.65 0.00 0.00 5046.77 512.00 46967.47 00:30:04.811 { 00:30:04.811 "results": [ 00:30:04.811 { 00:30:04.811 "job": "nvme0n1", 00:30:04.811 "core_mask": "0x2", 00:30:04.811 "workload": "randread", 00:30:04.811 "status": "finished", 00:30:04.811 "queue_depth": 16, 00:30:04.811 "io_size": 131072, 00:30:04.811 "runtime": 2.045856, 00:30:04.811 "iops": 3109.211987549466, 00:30:04.811 "mibps": 388.65149844368324, 00:30:04.811 "io_failed": 0, 00:30:04.811 "io_timeout": 0, 00:30:04.811 "avg_latency_us": 5046.774850914427, 00:30:04.811 "min_latency_us": 512.0, 00:30:04.811 "max_latency_us": 46967.46666666667 00:30:04.811 } 00:30:04.811 ], 00:30:04.811 "core_count": 1 00:30:04.811 } 00:30:04.811 08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:04.811 08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:04.811 08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:04.811 | .driver_specific 00:30:04.811 | .nvme_error 00:30:04.811 | .status_code 00:30:04.811 | .command_transient_transport_error' 00:30:04.811 08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:05.071 08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 206 > 0 )) 00:30:05.071 08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2146906 00:30:05.071 08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2146906 ']' 00:30:05.071 08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2146906 00:30:05.071 08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:05.071 08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:05.071 08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2146906 00:30:05.071 08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:05.071 08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:05.071 08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2146906' 00:30:05.071 killing process with pid 2146906 00:30:05.071 08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2146906 00:30:05.071 Received 
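
The 206 checked just above is the per-bdev count of transient transport errors that bdevperf accumulated while digest errors were being injected; the harness reads it back over the same RPC socket with the jq filter shown in the trace. A minimal standalone sketch of that query, assuming bdevperf is still listening on /var/tmp/bperf.sock and using SPDK_DIR as a stand-in for the workspace checkout path seen above:

#!/usr/bin/env bash
# Read per-bdev NVMe error statistics from a running bdevperf and pull out
# the transient transport error counter, mirroring get_transient_errcount
# in the trace above. SPDK_DIR is an assumed stand-in for the checkout path.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
  jq -r '.bdevs[0]
    | .driver_specific
    | .nvme_error
    | .status_code
    | .command_transient_transport_error')
# The test asserts the counter is non-zero, as in the (( 206 > 0 )) check.
(( errcount > 0 )) && echo "observed $errcount transient transport errors"

These counters are only populated because the setup passed --nvme-error-stat to bdev_nvme_set_options, as the next run's trace shows.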
00:30:05.071 
00:30:05.071 Latency(us)
[2024-11-28T07:29:02.360Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-28T07:29:02.360Z] ===================================================================================================================
[2024-11-28T07:29:02.360Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:05.071 08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2146906
00:30:05.332 08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2147649
08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2147649 /var/tmp/bperf.sock
08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2147649 ']'
08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
08:29:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
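
The relaunch above reduces to starting bdevperf pinned to core 1 with a private RPC socket and waiting for that socket before issuing any RPCs. A minimal sketch under the same parameters; the polling loop is only an illustrative stand-in for the harness's waitforlisten helper:

# Start bdevperf on core mask 0x2 with its own RPC socket: 4 KiB random
# writes, queue depth 128, 2-second runs; -z defers I/O until an explicit
# perform_tests RPC arrives. SPDK_DIR is an assumed stand-in path.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
  -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!
# Stand-in for waitforlisten: poll until the UNIX-domain socket appears.
until [[ -S /var/tmp/bperf.sock ]]; do sleep 0.1; done
echo "bdevperf ($bperfpid) is listening on /var/tmp/bperf.sock"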
00:30:05.332 [2024-11-28 08:29:02.483047] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization...
00:30:05.332 [2024-11-28 08:29:02.483105] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2147649 ]
00:30:05.332 [2024-11-28 08:29:02.566194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:05.332 [2024-11-28 08:29:02.596003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:06.273 08:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
08:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
08:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
08:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
08:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
08:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
08:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:06.273 08:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
08:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
08:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:06.534 nvme0n1
00:30:06.795 08:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
08:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
08:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
08:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
08:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
08:29:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
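
That RPC sequence is the core of the digest-error test: error statistics and unlimited retries are enabled, any stale CRC32C injection is cleared, the controller is attached with TCP data digest turned on, corruption is then injected into the accel framework's CRC32C operations, and the timed run is kicked off. A condensed sketch of the same calls, assuming every RPC targets the bperf socket (in the harness the bperf_rpc/rpc_cmd wrappers make that choice):

# SPDK_DIR as in the sketches above; all RPCs go to the bdevperf instance.
rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"
# Count NVMe errors per status code and retry failed I/O indefinitely.
$rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Clear any CRC32C error injection left over from a previous run.
$rpc accel_error_inject_error -o crc32c -t disable
# Attach the target with data digest (--ddgst) enabled, so each data PDU
# carries a CRC32C that the host verifies on receive.
$rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Re-arm injection in corrupt mode (-i 256 taken verbatim from the trace),
# so digest verification starts failing with transient transport errors.
$rpc accel_error_inject_error -o crc32c -t corrupt -i 256
# Start the timed workload; bdevperf was launched with -z and is waiting.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests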
00:30:06.795 Running I/O for 2 seconds...
00:30:06.795 [... repeated entries elided: each is a 'tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78' error, an 'nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 ... len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000' command print, and an 'nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 ... sqhd:0079 ...' completion, repeating from 08:29:03.937733 through 08:29:04.421323 ...]
[2024-11-28 08:29:04.421571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.320 [2024-11-28 08:29:04.421585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.320 [2024-11-28 08:29:04.430097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.320 [2024-11-28 08:29:04.430353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.320 [2024-11-28 08:29:04.430370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.320 [2024-11-28 08:29:04.438846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.320 [2024-11-28 08:29:04.439113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.320 [2024-11-28 08:29:04.439130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.320 [2024-11-28 08:29:04.447561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.320 [2024-11-28 08:29:04.447803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.320 [2024-11-28 08:29:04.447819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.320 [2024-11-28 08:29:04.456349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.320 [2024-11-28 08:29:04.456573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.320 [2024-11-28 08:29:04.456590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.320 [2024-11-28 08:29:04.465072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.320 [2024-11-28 08:29:04.465375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.320 [2024-11-28 08:29:04.465391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.320 [2024-11-28 08:29:04.473931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.320 [2024-11-28 08:29:04.474171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.320 [2024-11-28 08:29:04.474186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.320 [2024-11-28 08:29:04.482680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) 
with pdu=0x200016efda78 00:30:07.320 [2024-11-28 08:29:04.482919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.320 [2024-11-28 08:29:04.482936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.320 [2024-11-28 08:29:04.491437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.321 [2024-11-28 08:29:04.491659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.321 [2024-11-28 08:29:04.491674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.321 [2024-11-28 08:29:04.500193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.321 [2024-11-28 08:29:04.500490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.321 [2024-11-28 08:29:04.500506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.321 [2024-11-28 08:29:04.509018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.321 [2024-11-28 08:29:04.509275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.321 [2024-11-28 08:29:04.509290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.321 [2024-11-28 08:29:04.517774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.321 [2024-11-28 08:29:04.518024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.321 [2024-11-28 08:29:04.518040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.321 [2024-11-28 08:29:04.526525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.321 [2024-11-28 08:29:04.526776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.321 [2024-11-28 08:29:04.526792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.321 [2024-11-28 08:29:04.535244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.321 [2024-11-28 08:29:04.535497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.321 [2024-11-28 08:29:04.535514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.321 [2024-11-28 08:29:04.544058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.321 [2024-11-28 08:29:04.544317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.321 [2024-11-28 08:29:04.544332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.321 [2024-11-28 08:29:04.552761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.321 [2024-11-28 08:29:04.553061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.321 [2024-11-28 08:29:04.553077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.321 [2024-11-28 08:29:04.561517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.321 [2024-11-28 08:29:04.561793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.321 [2024-11-28 08:29:04.561809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.321 [2024-11-28 08:29:04.570365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.321 [2024-11-28 08:29:04.570628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.321 [2024-11-28 08:29:04.570644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.321 [2024-11-28 08:29:04.579089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.321 [2024-11-28 08:29:04.579358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.321 [2024-11-28 08:29:04.579374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.321 [2024-11-28 08:29:04.587852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.321 [2024-11-28 08:29:04.588090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.321 [2024-11-28 08:29:04.588107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.321 [2024-11-28 08:29:04.596596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.321 [2024-11-28 08:29:04.596890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.321 [2024-11-28 08:29:04.596907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.321 [2024-11-28 08:29:04.605336] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.321 [2024-11-28 08:29:04.605587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.321 [2024-11-28 08:29:04.605606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.583 [2024-11-28 08:29:04.614147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.583 [2024-11-28 08:29:04.614443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.583 [2024-11-28 08:29:04.614459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.622891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.623165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.623181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.631644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.631872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.631887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.640419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.640668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.640684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.649173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.649440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.649456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.657945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.658261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.658277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 
00:30:07.584 [2024-11-28 08:29:04.666798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.667051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.667068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.675544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.675855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.675870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.684332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.684582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.684599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.693058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.693351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.693367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.701838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.702103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.702118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.710551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.710802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.710817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.719315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.719595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.719610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.728123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.728355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.728370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.736908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.737177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.737191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.745656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.745939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.745955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.754459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.754705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.754720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.763239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.763523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.763539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.771995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.772235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.772250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.780730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.780978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.780993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.789502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.789770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.789785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.798274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.798550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.798564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.807001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.807137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.807152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.815734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.815870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.815885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.824519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.824764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.824779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.833278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.833528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.833546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.842039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.842275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.584 [2024-11-28 08:29:04.842290] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.584 [2024-11-28 08:29:04.850841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.584 [2024-11-28 08:29:04.851073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.585 [2024-11-28 08:29:04.851088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.585 [2024-11-28 08:29:04.859549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.585 [2024-11-28 08:29:04.859817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.585 [2024-11-28 08:29:04.859832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.585 [2024-11-28 08:29:04.868375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.585 [2024-11-28 08:29:04.868630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.585 [2024-11-28 08:29:04.868645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.847 [2024-11-28 08:29:04.877154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.847 [2024-11-28 08:29:04.877440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.847 [2024-11-28 08:29:04.877456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.847 [2024-11-28 08:29:04.885912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.847 [2024-11-28 08:29:04.886172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.847 [2024-11-28 08:29:04.886187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.847 [2024-11-28 08:29:04.894689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.847 [2024-11-28 08:29:04.894961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.847 [2024-11-28 08:29:04.894976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:07.847 [2024-11-28 08:29:04.903422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:07.847 [2024-11-28 08:29:04.903698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:07.847 
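Each *ERROR* line above comes from the host-side TCP transport's data-digest check: with data digests enabled, every NVMe/TCP data PDU carries a DDGST trailer, a CRC32C computed over the PDU's DATA field, and this run appears to be deliberately injecting mismatches at a steady cadence. Below is a minimal, standalone sketch of that check, assuming the standard CRC32C (Castagnoli) definition the transport uses; the helper names (crc32c, ddgst_ok) are illustrative, not SPDK's actual (typically hardware-accelerated) implementation in tcp.c.

```c
/*
 * Minimal sketch of NVMe/TCP data-digest (DDGST) verification.
 * Assumption: DDGST is standard CRC32C over the PDU DATA field.
 * Names below are hypothetical, not SPDK APIs.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bitwise, reflected CRC32C (polynomial 0x1EDC6F41, reflected 0x82F63B78). */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *buf++;
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Compare a received DDGST against one recomputed over the PDU data. */
static bool ddgst_ok(const uint8_t *pdu_data, size_t data_len, uint32_t recv_ddgst)
{
    return crc32c(pdu_data, data_len) == recv_ddgst;
}

int main(void)
{
    uint8_t data[4096] = { 0 };  /* stand-in for one 0x1000-byte WRITE payload */
    uint32_t good = crc32c(data, sizeof(data));

    /* Flipping a single payload bit after the digest was computed reproduces
     * the "Data digest error" path seen in the log above. */
    data[0] ^= 0x01;
    printf("digest %s\n", ddgst_ok(data, sizeof(data), good) ? "ok" : "mismatch");
    return 0;
}
```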
00:30:07.847 29046.00 IOPS, 113.46 MiB/s [2024-11-28T07:29:05.136Z] [2024-11-28 08:29:04.938473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78
00:30:07.847 [2024-11-28 08:29:04.938747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:07.847 [2024-11-28 08:29:04.938762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
[... the injected digest errors continue at the same cadence through 08:29:05.447, still on tqpair=(0x22343d0) with pdu=0x200016efda78, every printed completion reporting COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:30:08.375 [2024-11-28 08:29:05.447374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78
00:30:08.375 [2024-11-28 08:29:05.447640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:08.375 [2024-11-28 08:29:05.447656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT
ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.375 [2024-11-28 08:29:05.456113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.375 [2024-11-28 08:29:05.456364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.375 [2024-11-28 08:29:05.456379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.376 [2024-11-28 08:29:05.464842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.376 [2024-11-28 08:29:05.465097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.376 [2024-11-28 08:29:05.465112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.376 [2024-11-28 08:29:05.473584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.376 [2024-11-28 08:29:05.473872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.376 [2024-11-28 08:29:05.473888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.376 [2024-11-28 08:29:05.482363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.376 [2024-11-28 08:29:05.482628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.376 [2024-11-28 08:29:05.482642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.376 [2024-11-28 08:29:05.491104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.376 [2024-11-28 08:29:05.491402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.376 [2024-11-28 08:29:05.491417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.376 [2024-11-28 08:29:05.499880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.376 [2024-11-28 08:29:05.500163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.376 [2024-11-28 08:29:05.500179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.376 [2024-11-28 08:29:05.508650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.376 [2024-11-28 08:29:05.508939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.376 [2024-11-28 08:29:05.508955] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.376 [2024-11-28 08:29:05.517405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.376 [2024-11-28 08:29:05.517640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.376 [2024-11-28 08:29:05.517657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.376 [2024-11-28 08:29:05.526186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.376 [2024-11-28 08:29:05.526438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.376 [2024-11-28 08:29:05.526453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.376 [2024-11-28 08:29:05.534927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.376 [2024-11-28 08:29:05.535206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.376 [2024-11-28 08:29:05.535227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.376 [2024-11-28 08:29:05.543723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.376 [2024-11-28 08:29:05.543989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.376 [2024-11-28 08:29:05.544005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.376 [2024-11-28 08:29:05.552612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.376 [2024-11-28 08:29:05.552870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.376 [2024-11-28 08:29:05.552886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.376 [2024-11-28 08:29:05.561360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.376 [2024-11-28 08:29:05.561619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.376 [2024-11-28 08:29:05.561635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.376 [2024-11-28 08:29:05.570127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.376 [2024-11-28 08:29:05.570408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.376 [2024-11-28 08:29:05.570424] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.376 [2024-11-28 08:29:05.578826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.376 [2024-11-28 08:29:05.579111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.376 [2024-11-28 08:29:05.579127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.376 [2024-11-28 08:29:05.587629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.376 [2024-11-28 08:29:05.587861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.376 [2024-11-28 08:29:05.587877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.376 [2024-11-28 08:29:05.596409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.376 [2024-11-28 08:29:05.596667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.376 [2024-11-28 08:29:05.596682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.376 [2024-11-28 08:29:05.605168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.376 [2024-11-28 08:29:05.605505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.376 [2024-11-28 08:29:05.605521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.376 [2024-11-28 08:29:05.613981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.376 [2024-11-28 08:29:05.614259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.376 [2024-11-28 08:29:05.614275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.376 [2024-11-28 08:29:05.622730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.376 [2024-11-28 08:29:05.622967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.376 [2024-11-28 08:29:05.622982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.376 [2024-11-28 08:29:05.631487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.376 [2024-11-28 08:29:05.631725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:08.376 [2024-11-28 08:29:05.631740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.376 [2024-11-28 08:29:05.640262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.376 [2024-11-28 08:29:05.640557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.376 [2024-11-28 08:29:05.640572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.376 [2024-11-28 08:29:05.649006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.376 [2024-11-28 08:29:05.649188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.376 [2024-11-28 08:29:05.649202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.376 [2024-11-28 08:29:05.657809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.376 [2024-11-28 08:29:05.658066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.376 [2024-11-28 08:29:05.658081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.638 [2024-11-28 08:29:05.666615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.638 [2024-11-28 08:29:05.666844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.638 [2024-11-28 08:29:05.666865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.638 [2024-11-28 08:29:05.675359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.638 [2024-11-28 08:29:05.675622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.638 [2024-11-28 08:29:05.675637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.638 [2024-11-28 08:29:05.684182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.638 [2024-11-28 08:29:05.684436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.638 [2024-11-28 08:29:05.684452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.638 [2024-11-28 08:29:05.693031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.638 [2024-11-28 08:29:05.693291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9941 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.638 [2024-11-28 08:29:05.693307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.638 [2024-11-28 08:29:05.701751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.638 [2024-11-28 08:29:05.701986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.638 [2024-11-28 08:29:05.702001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.638 [2024-11-28 08:29:05.710512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.638 [2024-11-28 08:29:05.710744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.638 [2024-11-28 08:29:05.710759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.638 [2024-11-28 08:29:05.719266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.638 [2024-11-28 08:29:05.719539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.638 [2024-11-28 08:29:05.719554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.638 [2024-11-28 08:29:05.727972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.638 [2024-11-28 08:29:05.728230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.638 [2024-11-28 08:29:05.728245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.638 [2024-11-28 08:29:05.736805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.638 [2024-11-28 08:29:05.737048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.638 [2024-11-28 08:29:05.737063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.638 [2024-11-28 08:29:05.745587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.638 [2024-11-28 08:29:05.745858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.638 [2024-11-28 08:29:05.745873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.638 [2024-11-28 08:29:05.754363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.638 [2024-11-28 08:29:05.754633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:123 nsid:1 lba:22212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.638 [2024-11-28 08:29:05.754648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.638 [2024-11-28 08:29:05.763083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.638 [2024-11-28 08:29:05.763357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.638 [2024-11-28 08:29:05.763372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.638 [2024-11-28 08:29:05.771849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.638 [2024-11-28 08:29:05.772088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.638 [2024-11-28 08:29:05.772103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.638 [2024-11-28 08:29:05.780623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.638 [2024-11-28 08:29:05.780886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.638 [2024-11-28 08:29:05.780901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.638 [2024-11-28 08:29:05.789445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.638 [2024-11-28 08:29:05.789676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.638 [2024-11-28 08:29:05.789692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.638 [2024-11-28 08:29:05.798224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.638 [2024-11-28 08:29:05.798499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.638 [2024-11-28 08:29:05.798515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.638 [2024-11-28 08:29:05.807030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.638 [2024-11-28 08:29:05.807303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.638 [2024-11-28 08:29:05.807319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.638 [2024-11-28 08:29:05.815803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.639 [2024-11-28 08:29:05.816093] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.639 [2024-11-28 08:29:05.816108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.639 [2024-11-28 08:29:05.824627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.639 [2024-11-28 08:29:05.824863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.639 [2024-11-28 08:29:05.824877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.639 [2024-11-28 08:29:05.833422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.639 [2024-11-28 08:29:05.833651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.639 [2024-11-28 08:29:05.833666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.639 [2024-11-28 08:29:05.842208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.639 [2024-11-28 08:29:05.842473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.639 [2024-11-28 08:29:05.842488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.639 [2024-11-28 08:29:05.850937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.639 [2024-11-28 08:29:05.851199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.639 [2024-11-28 08:29:05.851214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.639 [2024-11-28 08:29:05.859713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.639 [2024-11-28 08:29:05.859957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.639 [2024-11-28 08:29:05.859971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.639 [2024-11-28 08:29:05.868508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.639 [2024-11-28 08:29:05.868773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.639 [2024-11-28 08:29:05.868788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.639 [2024-11-28 08:29:05.877373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.639 
[2024-11-28 08:29:05.877609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.639 [2024-11-28 08:29:05.877624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.639 [2024-11-28 08:29:05.886155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.639 [2024-11-28 08:29:05.886441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.639 [2024-11-28 08:29:05.886457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.639 [2024-11-28 08:29:05.894867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.639 [2024-11-28 08:29:05.895104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.639 [2024-11-28 08:29:05.895122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.639 [2024-11-28 08:29:05.903657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.639 [2024-11-28 08:29:05.903799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.639 [2024-11-28 08:29:05.903814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.639 [2024-11-28 08:29:05.912506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.639 [2024-11-28 08:29:05.912755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.639 [2024-11-28 08:29:05.912770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.639 [2024-11-28 08:29:05.921334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.639 [2024-11-28 08:29:05.921586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.639 [2024-11-28 08:29:05.921601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.900 [2024-11-28 08:29:05.930097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22343d0) with pdu=0x200016efda78 00:30:08.900 [2024-11-28 08:29:05.930349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:08.900 [2024-11-28 08:29:05.930364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:08.900 29089.50 IOPS, 113.63 MiB/s 00:30:08.900 Latency(us) 00:30:08.900 [2024-11-28T07:29:06.189Z] Device Information 
: runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.900 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:08.900 nvme0n1 : 2.01 29091.17 113.64 0.00 0.00 4392.60 2034.35 9448.11 00:30:08.900 [2024-11-28T07:29:06.189Z] =================================================================================================================== 00:30:08.900 [2024-11-28T07:29:06.189Z] Total : 29091.17 113.64 0.00 0.00 4392.60 2034.35 9448.11 00:30:08.900 { 00:30:08.900 "results": [ 00:30:08.900 { 00:30:08.900 "job": "nvme0n1", 00:30:08.900 "core_mask": "0x2", 00:30:08.900 "workload": "randwrite", 00:30:08.900 "status": "finished", 00:30:08.900 "queue_depth": 128, 00:30:08.900 "io_size": 4096, 00:30:08.900 "runtime": 2.005385, 00:30:08.900 "iops": 29091.172019337933, 00:30:08.900 "mibps": 113.6373907005388, 00:30:08.900 "io_failed": 0, 00:30:08.900 "io_timeout": 0, 00:30:08.900 "avg_latency_us": 4392.601283761006, 00:30:08.900 "min_latency_us": 2034.3466666666666, 00:30:08.900 "max_latency_us": 9448.106666666667 00:30:08.900 } 00:30:08.901 ], 00:30:08.901 "core_count": 1 00:30:08.901 } 00:30:08.901 08:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:08.901 08:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:08.901 | .driver_specific 00:30:08.901 | .nvme_error 00:30:08.901 | .status_code 00:30:08.901 | .command_transient_transport_error' 00:30:08.901 08:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:08.901 08:29:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:08.901 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 228 > 0 )) 00:30:08.901 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2147649 00:30:08.901 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2147649 ']' 00:30:08.901 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2147649 00:30:08.901 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:08.901 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:08.901 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2147649 00:30:09.161 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:09.161 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:09.161 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2147649' 00:30:09.161 killing process with pid 2147649 00:30:09.161 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2147649 00:30:09.161 Received shutdown signal, test time was about 2.000000 seconds 00:30:09.161 00:30:09.161 Latency(us) 00:30:09.161 [2024-11-28T07:29:06.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:09.161 
[2024-11-28T07:29:06.450Z] =================================================================================================================== 00:30:09.161 [2024-11-28T07:29:06.450Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:09.161 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2147649 00:30:09.161 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:30:09.161 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:09.161 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:30:09.161 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:30:09.161 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:30:09.161 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2148338 00:30:09.161 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2148338 /var/tmp/bperf.sock 00:30:09.161 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2148338 ']' 00:30:09.161 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:30:09.161 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:09.161 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:09.161 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:09.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:09.162 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:09.162 08:29:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:09.162 [2024-11-28 08:29:06.358126] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:30:09.162 [2024-11-28 08:29:06.358189] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2148338 ] 00:30:09.162 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:09.162 Zero copy mechanism will not be used. 
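The get_transient_errcount check traced above reads that per-status counter back over the bperf RPC socket, and the assertion (( 228 > 0 )) passes because 228 WRITE completions carried the transient transport status. A condensed sketch of the same query, with the absolute workspace paths shortened to repo-relative form:

    # read back the counter kept because bdev_nvme_set_options was given --nvme-error-stat
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The next case, run_bperf_err randwrite 131072 16, then relaunches bdevperf with 128 KiB I/O at queue depth 16; because 131072 exceeds the 65536-byte zero copy threshold, the app notes that the zero copy mechanism will not be used for this run.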
00:30:09.162 [2024-11-28 08:29:06.439926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.421 [2024-11-28 08:29:06.469376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.994 08:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:09.994 08:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:30:09.994 08:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:09.994 08:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:10.255 08:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:10.255 08:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.255 08:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:10.255 08:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.255 08:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:10.255 08:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:10.515 nvme0n1 00:30:10.515 08:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:10.515 08:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.515 08:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:30:10.515 08:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.515 08:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:10.515 08:29:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:10.777 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:10.777 Zero copy mechanism will not be used. 00:30:10.777 Running I/O for 2 seconds... 
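The RPC sequence traced above is what arms the fault for this second case: error statistics and unlimited bdev retries are enabled first, crc32c injection is disabled so the controller can attach cleanly with data digest (--ddgst) enabled, and the accel layer is then told to corrupt every 32nd crc32c result before perform_tests starts the I/O, which matches the sqhd stride of 0x20 between consecutive errors in the completions that follow. A condensed, repo-relative sketch of the sequence (commands taken from the trace; RPC is just a local shorthand for this sketch):

    # drive the bperf app over its RPC socket in the order host/digest.sh uses
    RPC="scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep per-status error counters, retry forever
    $RPC accel_error_inject_error -o crc32c -t disable                   # attach with a clean digest path
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32             # corrupt every 32nd crc32c result
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests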
00:30:10.777 [2024-11-28 08:29:07.811011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:10.777 [2024-11-28 08:29:07.811275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.777 [2024-11-28 08:29:07.811300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:10.777 [2024-11-28 08:29:07.815543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:10.777 [2024-11-28 08:29:07.815601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.777 [2024-11-28 08:29:07.815619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:10.777 [2024-11-28 08:29:07.818977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:10.777 [2024-11-28 08:29:07.819055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.777 [2024-11-28 08:29:07.819071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:10.777 [2024-11-28 08:29:07.822502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:10.777 [2024-11-28 08:29:07.822556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.777 [2024-11-28 08:29:07.822573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:10.777 [2024-11-28 08:29:07.826206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:10.777 [2024-11-28 08:29:07.826291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.777 [2024-11-28 08:29:07.826307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:10.777 [2024-11-28 08:29:07.829726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:10.777 [2024-11-28 08:29:07.829802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.777 [2024-11-28 08:29:07.829819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:10.777 [2024-11-28 08:29:07.833148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:10.777 [2024-11-28 08:29:07.833217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.777 [2024-11-28 08:29:07.833233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:10.777 [2024-11-28 08:29:07.837610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:10.777 [2024-11-28 08:29:07.837660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.777 [2024-11-28 08:29:07.837677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:10.777 [2024-11-28 08:29:07.842529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:10.777 [2024-11-28 08:29:07.842578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.777 [2024-11-28 08:29:07.842594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:10.777 [2024-11-28 08:29:07.848117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:10.777 [2024-11-28 08:29:07.848185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.777 [2024-11-28 08:29:07.848202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:10.777 [2024-11-28 08:29:07.858901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:10.777 [2024-11-28 08:29:07.859134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.777 [2024-11-28 08:29:07.859149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:10.777 [2024-11-28 08:29:07.863706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:10.777 [2024-11-28 08:29:07.863776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.777 [2024-11-28 08:29:07.863792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:10.777 [2024-11-28 08:29:07.867511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:10.777 [2024-11-28 08:29:07.867582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.777 [2024-11-28 08:29:07.867598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:10.777 [2024-11-28 08:29:07.871366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:10.777 [2024-11-28 08:29:07.871431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:10.777 [2024-11-28 08:29:07.871447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:10.777 [2024-11-28 08:29:07.874944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8
00:30:10.777 [2024-11-28 08:29:07.874998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.777 [2024-11-28 08:29:07.875014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the same tcp.c:2233 data_crc32_calc_done *ERROR* / nvme_qpair.c:243 WRITE / nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplet repeats for successive WRITE commands on tqpair=(0x2234710) with pdu=0x200016eff3c8 from [2024-11-28 08:29:07.878561] through [2024-11-28 08:29:08.804903] (elapsed 00:30:10.777 to 00:30:11.567); only the timestamps and the lba and sqhd fields vary ...]
00:30:11.567 3987.00 IOPS, 498.38 MiB/s [2024-11-28T07:29:08.856Z]
[... the same data digest error triplet continues from [2024-11-28 08:29:08.814904] through [2024-11-28 08:29:09.080467] (elapsed 00:30:11.567 to 00:30:11.830) ...]
00:30:11.830 [2024-11-28 08:29:09.086535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8
00:30:11.830 [2024-11-28 08:29:09.086727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.830 [2024-11-28 08:29:09.086744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:11.830 [2024-11-28 08:29:09.093785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:11.830 [2024-11-28 08:29:09.094078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.830 [2024-11-28 08:29:09.094095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:11.830 [2024-11-28 08:29:09.103127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:11.830 [2024-11-28 08:29:09.103502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.830 [2024-11-28 08:29:09.103519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:11.830 [2024-11-28 08:29:09.111535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:11.830 [2024-11-28 08:29:09.111794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.830 [2024-11-28 08:29:09.111818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:11.830 [2024-11-28 08:29:09.115288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:11.830 [2024-11-28 08:29:09.115478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:11.830 [2024-11-28 08:29:09.115494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.091 [2024-11-28 08:29:09.124077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 08:29:09.124293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.124310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.129934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 08:29:09.130133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.130150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.133746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 08:29:09.133934] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.133950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.137311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 08:29:09.137502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.137519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.141014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 08:29:09.141309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.141333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.144685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 08:29:09.144873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.144889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.148270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 08:29:09.148456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.148473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.151954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 08:29:09.152141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.152164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.156840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 08:29:09.157083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.157100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.163048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 
08:29:09.163241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.163257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.167152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 08:29:09.167347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.167363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.172307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 08:29:09.172496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.172512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.180999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 08:29:09.181328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.181348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.188909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 08:29:09.189234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.189251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.193923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 08:29:09.194142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.194163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.204461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 08:29:09.204774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.204791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.212723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with 
pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 08:29:09.213015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.213032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.222234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 08:29:09.222578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.222595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.228632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 08:29:09.228819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.228835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.236249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 08:29:09.236600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.236617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.245525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 08:29:09.245843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.245861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.254315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 08:29:09.254738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.254755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.264027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 08:29:09.264437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.264455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.272242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 08:29:09.272652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.272669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.279951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.092 [2024-11-28 08:29:09.280141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.092 [2024-11-28 08:29:09.280157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.092 [2024-11-28 08:29:09.288718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.093 [2024-11-28 08:29:09.289049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.093 [2024-11-28 08:29:09.289065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.093 [2024-11-28 08:29:09.297776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.093 [2024-11-28 08:29:09.298217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.093 [2024-11-28 08:29:09.298236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.093 [2024-11-28 08:29:09.305380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.093 [2024-11-28 08:29:09.305699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.093 [2024-11-28 08:29:09.305715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.093 [2024-11-28 08:29:09.314595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.093 [2024-11-28 08:29:09.314944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.093 [2024-11-28 08:29:09.314961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.093 [2024-11-28 08:29:09.322193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.093 [2024-11-28 08:29:09.322603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.093 [2024-11-28 08:29:09.322621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.093 [2024-11-28 08:29:09.331958] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.093 [2024-11-28 08:29:09.332294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.093 [2024-11-28 08:29:09.332311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.093 [2024-11-28 08:29:09.341718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.093 [2024-11-28 08:29:09.342015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.093 [2024-11-28 08:29:09.342033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.093 [2024-11-28 08:29:09.351072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.093 [2024-11-28 08:29:09.351343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.093 [2024-11-28 08:29:09.351360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.093 [2024-11-28 08:29:09.360950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.093 [2024-11-28 08:29:09.361375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.093 [2024-11-28 08:29:09.361393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.093 [2024-11-28 08:29:09.372323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.093 [2024-11-28 08:29:09.372535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.093 [2024-11-28 08:29:09.372551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.356 [2024-11-28 08:29:09.382433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.356 [2024-11-28 08:29:09.382859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.356 [2024-11-28 08:29:09.382876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.356 [2024-11-28 08:29:09.392959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.356 [2024-11-28 08:29:09.393262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.356 [2024-11-28 08:29:09.393279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.356 [2024-11-28 08:29:09.403284] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.356 [2024-11-28 08:29:09.403519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.356 [2024-11-28 08:29:09.403534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.356 [2024-11-28 08:29:09.409885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.356 [2024-11-28 08:29:09.410186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.410207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.419807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.420134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.420151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.426526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.426856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.426873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.433647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.433837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.433853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.437444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.437632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.437649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.445427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.445734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.445751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.357 
[2024-11-28 08:29:09.453867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.454184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.454201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.463428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.463638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.463653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.473871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.473922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.473937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.485348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.485571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.485588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.497526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.497760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.497776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.508694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.508893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.508909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.520405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.520723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.520739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.529950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.530030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.530045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.539891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.540091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.540107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.544229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.544408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.544424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.553659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.553963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.553982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.560416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.560633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.560650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.568462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.568762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.568779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.577226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.577608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.577625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.585917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.586077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.586094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.596157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.596418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.596435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.603851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.604130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.604147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.612151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.612430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.612446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.619103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.619286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.619302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.625706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.625884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.625900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.632046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.357 [2024-11-28 08:29:09.632338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.357 [2024-11-28 08:29:09.632358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.357 [2024-11-28 08:29:09.638191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.358 [2024-11-28 08:29:09.638376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.358 [2024-11-28 08:29:09.638392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.619 [2024-11-28 08:29:09.644911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.619 [2024-11-28 08:29:09.645088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.619 [2024-11-28 08:29:09.645104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.619 [2024-11-28 08:29:09.653942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.619 [2024-11-28 08:29:09.654155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.619 [2024-11-28 08:29:09.654177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.619 [2024-11-28 08:29:09.660377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.619 [2024-11-28 08:29:09.660712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.619 [2024-11-28 08:29:09.660729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.619 [2024-11-28 08:29:09.666201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.619 [2024-11-28 08:29:09.666461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.619 [2024-11-28 08:29:09.666478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.619 [2024-11-28 08:29:09.674622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.619 [2024-11-28 08:29:09.674925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.619 [2024-11-28 08:29:09.674943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.619 [2024-11-28 08:29:09.682181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.619 [2024-11-28 08:29:09.682356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.619 [2024-11-28 08:29:09.682372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.619 [2024-11-28 08:29:09.690399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.620 [2024-11-28 08:29:09.690715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.620 [2024-11-28 08:29:09.690731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.620 [2024-11-28 08:29:09.699213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.620 [2024-11-28 08:29:09.699383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.620 [2024-11-28 08:29:09.699399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.620 [2024-11-28 08:29:09.708110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.620 [2024-11-28 08:29:09.708469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.620 [2024-11-28 08:29:09.708487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.620 [2024-11-28 08:29:09.714454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.620 [2024-11-28 08:29:09.714623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.620 [2024-11-28 08:29:09.714639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.620 [2024-11-28 08:29:09.718965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.620 [2024-11-28 08:29:09.719130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.620 [2024-11-28 08:29:09.719147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.620 [2024-11-28 08:29:09.725299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.620 [2024-11-28 08:29:09.725482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.620 [2024-11-28 08:29:09.725498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.620 [2024-11-28 08:29:09.731447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.620 [2024-11-28 08:29:09.731699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.620 [2024-11-28 
08:29:09.731715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.620 [2024-11-28 08:29:09.742391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.620 [2024-11-28 08:29:09.742772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.620 [2024-11-28 08:29:09.742789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.620 [2024-11-28 08:29:09.753183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.620 [2024-11-28 08:29:09.753552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.620 [2024-11-28 08:29:09.753568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.620 [2024-11-28 08:29:09.764688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.620 [2024-11-28 08:29:09.765076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.620 [2024-11-28 08:29:09.765093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:12.620 [2024-11-28 08:29:09.775889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.620 [2024-11-28 08:29:09.776195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.620 [2024-11-28 08:29:09.776213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:12.620 [2024-11-28 08:29:09.786554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.620 [2024-11-28 08:29:09.786908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.620 [2024-11-28 08:29:09.786925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:12.620 [2024-11-28 08:29:09.796903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.620 [2024-11-28 08:29:09.797150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:12.620 [2024-11-28 08:29:09.797172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:12.620 [2024-11-28 08:29:09.808472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2234710) with pdu=0x200016eff3c8 00:30:12.620 [2024-11-28 08:29:09.808790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
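Each triplet above is one deliberately corrupted bperf write observed at three layers: the TCP transport's CRC32C data digest check fails for the PDU (tcp.c:2233), the affected 128 KiB WRITE is printed (nvme_qpair.c:243; len:32 blocks at the 4 KiB block size implied by the 131072-byte IO size), and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), the retryable status the nvmf_digest_error test provokes on purpose. A quick way to sanity-check a captured run is to confirm the two counts pair up; a minimal bash sketch, where the file name bperf.log is a placeholder rather than anything the suite writes by that name:

  # count digest errors vs. transient-transport-error completions in a saved log
  errors=$(grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' bperf.log)
  completions=$(grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log)
  echo "digest errors: ${errors}, transient completions: ${completions}"
  (( errors == completions )) || echo 'mismatch: a digest error is missing its completion'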
00:30:12.620 3865.50 IOPS, 483.19 MiB/s
00:30:12.620 Latency(us)
00:30:12.620 [2024-11-28T07:29:09.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:12.620 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:30:12.620 nvme0n1 : 2.01 3863.06 482.88 0.00 0.00 4134.86 1624.75 12069.55
00:30:12.620 [2024-11-28T07:29:09.909Z] ===================================================================================================================
00:30:12.620 [2024-11-28T07:29:09.909Z] Total : 3863.06 482.88 0.00 0.00 4134.86 1624.75 12069.55
00:30:12.620 {
00:30:12.620 "results": [
00:30:12.620 {
00:30:12.620 "job": "nvme0n1",
00:30:12.620 "core_mask": "0x2",
00:30:12.620 "workload": "randwrite",
00:30:12.620 "status": "finished",
00:30:12.620 "queue_depth": 16,
00:30:12.620 "io_size": 131072,
00:30:12.620 "runtime": 2.006184,
00:30:12.620 "iops": 3863.0554326023935,
00:30:12.620 "mibps": 482.8819290752992,
00:30:12.620 "io_failed": 0,
00:30:12.620 "io_timeout": 0,
00:30:12.620 "avg_latency_us": 4134.863552688173,
00:30:12.620 "min_latency_us": 1624.7466666666667,
00:30:12.620 "max_latency_us": 12069.546666666667
00:30:12.620 }
00:30:12.620 ],
00:30:12.620 "core_count": 1
00:30:12.620 }
00:30:12.620 08:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:12.620 08:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:12.620 08:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:12.620 | .driver_specific
00:30:12.620 | .nvme_error
00:30:12.620 | .status_code
00:30:12.620 | .command_transient_transport_error'
00:30:12.620 08:29:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:12.882 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 250 > 0 ))
00:30:12.882 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2148338
00:30:12.882 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2148338 ']'
00:30:12.882 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2148338
00:30:12.882 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:30:12.882 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:12.882 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2148338
00:30:12.882 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:12.882 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:12.882 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2148338'
00:30:12.882 killing process with pid 2148338
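The xtrace lines above show how digest.sh turns those completions into a pass/fail number: get_transient_errcount (@71) calls bperf_rpc bdev_get_iostat (@27), which sends a JSON-RPC request to the still-running bdevperf process over /var/tmp/bperf.sock (@18), and jq (@28) extracts the command_transient_transport_error counter from the bdev's driver-specific NVMe error statistics; the (( 250 > 0 )) arithmetic at @71 is the actual assertion. The same query can be issued by hand; a minimal sketch, assuming the bdevperf instance is still serving RPCs on that socket and that it runs from the root of an SPDK checkout:

  # read the transient-transport-error counter for bdev nvme0n1
  count=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( count > 0 )) && echo "nvme0n1 recorded ${count} transient transport errors"  # 250 in this run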
08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2148338
00:30:12.882 Received shutdown signal, test time was about 2.000000 seconds
00:30:12.882
00:30:12.882 Latency(us)
00:30:12.882 [2024-11-28T07:29:10.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:12.882 [2024-11-28T07:29:10.171Z] ===================================================================================================================
00:30:12.882 [2024-11-28T07:29:10.171Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:12.882 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2148338
00:30:13.144 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2145934
00:30:13.144 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2145934 ']'
00:30:13.144 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2145934
00:30:13.144 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:30:13.144 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:13.144 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2145934
00:30:13.144 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:13.144 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:13.144 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2145934'
00:30:13.144 killing process with pid 2145934
00:30:13.144 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2145934
00:30:13.144 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2145934
00:30:13.144
00:30:13.144 real 0m16.570s
00:30:13.144 user 0m32.823s
00:30:13.144 sys 0m3.576s
08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:13.144 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:13.144 ************************************
00:30:13.144 END TEST nvmf_digest_error
00:30:13.144 ************************************
00:30:13.144 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:30:13.144 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:30:13.144 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:13.144 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:30:13.144 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:13.144 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:30:13.144 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:13.144 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:13.144 rmmod nvme_tcp
00:30:13.405 rmmod nvme_fabrics
00:30:13.405 rmmod nvme_keyring
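killprocess (autotest_common.sh) is the teardown helper traced twice here, first for the bdevperf process (pid 2148338, whose comm is reactor_1) and then for what is evidently the nvmf target (pid 2145934, reactor_0). A condensed bash reconstruction of the code path this run actually exercises follows; the real helper also covers sudo-wrapped processes, which never occur here, and the exact return codes are not visible in the trace:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                        # @954: a pid argument is required
      if ! kill -0 "$pid" 2>/dev/null; then            # @958: probe that the process exists
          echo "Process with pid $pid is not found"    # @981: seen below for pid 2145934
          return 0                                     # assumption: status not shown in trace
      fi
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")  # @959/@960: Linux-specific lookup
      if [ "$process_name" != sudo ]; then             # @964: reactor_0/reactor_1 take this branch
          echo "killing process with pid $pid"         # @972
          kill "$pid"                                  # @973: plain SIGTERM
      fi
      wait "$pid"                                      # @978: reap it and surface its exit status
  }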
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:13.405 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:30:13.405 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:30:13.405 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2145934 ']' 00:30:13.405 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2145934 00:30:13.405 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2145934 ']' 00:30:13.405 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2145934 00:30:13.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2145934) - No such process 00:30:13.405 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2145934 is not found' 00:30:13.405 Process with pid 2145934 is not found 00:30:13.405 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:13.405 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:13.405 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:13.405 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:30:13.405 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:30:13.405 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:13.405 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:30:13.405 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:13.405 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:13.405 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.405 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:13.405 08:29:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.318 08:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:15.318 00:30:15.318 real 0m43.536s 00:30:15.318 user 1m8.373s 00:30:15.318 sys 0m13.176s 00:30:15.318 08:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:15.318 08:29:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:30:15.318 ************************************ 00:30:15.318 END TEST nvmf_digest 00:30:15.318 ************************************ 00:30:15.579 08:29:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:30:15.579 08:29:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:30:15.579 08:29:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:30:15.579 08:29:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:15.579 08:29:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:15.579 08:29:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:15.579 08:29:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.579 ************************************ 00:30:15.579 START TEST 
nvmf_bdevperf 00:30:15.579 ************************************ 00:30:15.579 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:15.579 * Looking for test storage... 00:30:15.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:15.579 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:15.579 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:30:15.579 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:15.579 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:15.579 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:15.579 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:15.579 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:15.579 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:30:15.579 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:30:15.579 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:15.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.580 --rc genhtml_branch_coverage=1 00:30:15.580 --rc genhtml_function_coverage=1 00:30:15.580 --rc genhtml_legend=1 00:30:15.580 --rc geninfo_all_blocks=1 00:30:15.580 --rc geninfo_unexecuted_blocks=1 00:30:15.580 00:30:15.580 ' 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:15.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.580 --rc genhtml_branch_coverage=1 00:30:15.580 --rc genhtml_function_coverage=1 00:30:15.580 --rc genhtml_legend=1 00:30:15.580 --rc geninfo_all_blocks=1 00:30:15.580 --rc geninfo_unexecuted_blocks=1 00:30:15.580 00:30:15.580 ' 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:15.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.580 --rc genhtml_branch_coverage=1 00:30:15.580 --rc genhtml_function_coverage=1 00:30:15.580 --rc genhtml_legend=1 00:30:15.580 --rc geninfo_all_blocks=1 00:30:15.580 --rc geninfo_unexecuted_blocks=1 00:30:15.580 00:30:15.580 ' 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:15.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:15.580 --rc genhtml_branch_coverage=1 00:30:15.580 --rc genhtml_function_coverage=1 00:30:15.580 --rc genhtml_legend=1 00:30:15.580 --rc geninfo_all_blocks=1 00:30:15.580 --rc geninfo_unexecuted_blocks=1 00:30:15.580 00:30:15.580 ' 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:15.580 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:15.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:15.842 08:29:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:23.986 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:23.987 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:23.987 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
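[editor's note] The pci_devs walk above maps each supported Intel E810 function (device ID 0x159b) to its kernel net device through /sys/bus/pci before the harness picks target and initiator interfaces. A rough standalone equivalent of that lookup, assuming the pciutils lspci tool is available (the harness builds its own pci_bus_cache rather than shelling out to lspci):

# List Intel E810 functions (8086:159b) and the net device sysfs binds to each.
for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    echo "Found $pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
done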
00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:23.987 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:23.987 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:23.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:23.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:30:23.987 00:30:23.987 --- 10.0.0.2 ping statistics --- 00:30:23.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.987 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:23.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:23.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:30:23.987 00:30:23.987 --- 10.0.0.1 ping statistics --- 00:30:23.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.987 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:23.987 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:23.988 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:23.988 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:23.988 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:23.988 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:23.988 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:23.988 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:23.988 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:23.988 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2153352 00:30:23.988 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2153352 00:30:23.988 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:23.988 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2153352 ']' 00:30:23.988 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:23.988 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:23.988 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:23.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:23.988 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:23.988 08:29:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:23.988 [2024-11-28 08:29:20.487913] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:30:23.988 [2024-11-28 08:29:20.487983] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:23.988 [2024-11-28 08:29:20.588072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:23.988 [2024-11-28 08:29:20.639634] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:23.988 [2024-11-28 08:29:20.639690] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:23.988 [2024-11-28 08:29:20.639699] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:23.988 [2024-11-28 08:29:20.639706] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:23.988 [2024-11-28 08:29:20.639712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:23.988 [2024-11-28 08:29:20.641569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:23.988 [2024-11-28 08:29:20.641729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:23.988 [2024-11-28 08:29:20.641730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:24.249 [2024-11-28 08:29:21.366708] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:24.249 Malloc0 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
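[editor's note] The rpc_cmd wrappers above drive scripts/rpc.py against the freshly started target. A consolidated sketch of the same bring-up, assuming the app's default /var/tmp/spdk.sock RPC socket (the flags are the ones host/bdevperf.sh issues in the trace):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
# Malloc0 is then attached as a namespace and a 10.0.0.2:4420 TCP listener
# is added, as the trace below shows.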
00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:24.249 [2024-11-28 08:29:21.443437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:24.249 { 00:30:24.249 "params": { 00:30:24.249 "name": "Nvme$subsystem", 00:30:24.249 "trtype": "$TEST_TRANSPORT", 00:30:24.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:24.249 "adrfam": "ipv4", 00:30:24.249 "trsvcid": "$NVMF_PORT", 00:30:24.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:24.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:24.249 "hdgst": ${hdgst:-false}, 00:30:24.249 "ddgst": ${ddgst:-false} 00:30:24.249 }, 00:30:24.249 "method": "bdev_nvme_attach_controller" 00:30:24.249 } 00:30:24.249 EOF 00:30:24.249 )") 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:30:24.249 08:29:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:24.249 "params": { 00:30:24.249 "name": "Nvme1", 00:30:24.249 "trtype": "tcp", 00:30:24.249 "traddr": "10.0.0.2", 00:30:24.249 "adrfam": "ipv4", 00:30:24.249 "trsvcid": "4420", 00:30:24.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:24.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:24.249 "hdgst": false, 00:30:24.249 "ddgst": false 00:30:24.249 }, 00:30:24.249 "method": "bdev_nvme_attach_controller" 00:30:24.249 }' 00:30:24.249 [2024-11-28 08:29:21.502203] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:30:24.249 [2024-11-28 08:29:21.502265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2153458 ] 00:30:24.510 [2024-11-28 08:29:21.595249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.510 [2024-11-28 08:29:21.648022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.770 Running I/O for 1 seconds... 00:30:25.713 8461.00 IOPS, 33.05 MiB/s 00:30:25.713 Latency(us) 00:30:25.713 [2024-11-28T07:29:23.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.713 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:25.713 Verification LBA range: start 0x0 length 0x4000 00:30:25.713 Nvme1n1 : 1.01 8552.57 33.41 0.00 0.00 14882.97 1174.19 13325.65 00:30:25.713 [2024-11-28T07:29:23.002Z] =================================================================================================================== 00:30:25.713 [2024-11-28T07:29:23.002Z] Total : 8552.57 33.41 0.00 0.00 14882.97 1174.19 13325.65 00:30:25.713 08:29:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2153727 00:30:25.713 08:29:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:30:25.713 08:29:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:25.713 08:29:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:25.713 08:29:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:30:25.713 08:29:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:30:25.713 08:29:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:25.713 08:29:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:25.713 { 00:30:25.713 "params": { 00:30:25.713 "name": "Nvme$subsystem", 00:30:25.713 "trtype": "$TEST_TRANSPORT", 00:30:25.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:25.713 "adrfam": "ipv4", 00:30:25.713 "trsvcid": "$NVMF_PORT", 00:30:25.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:25.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:25.713 "hdgst": ${hdgst:-false}, 00:30:25.713 "ddgst": ${ddgst:-false} 00:30:25.713 }, 00:30:25.713 "method": "bdev_nvme_attach_controller" 00:30:25.713 } 00:30:25.713 EOF 00:30:25.713 )") 00:30:25.713 08:29:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:30:25.713 08:29:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:30:25.713 08:29:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:30:25.713 08:29:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:25.713 "params": { 00:30:25.713 "name": "Nvme1", 00:30:25.713 "trtype": "tcp", 00:30:25.713 "traddr": "10.0.0.2", 00:30:25.713 "adrfam": "ipv4", 00:30:25.713 "trsvcid": "4420", 00:30:25.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:25.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:25.713 "hdgst": false, 00:30:25.713 "ddgst": false 00:30:25.713 }, 00:30:25.713 "method": "bdev_nvme_attach_controller" 00:30:25.713 }' 00:30:25.975 [2024-11-28 08:29:23.027881] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:30:25.975 [2024-11-28 08:29:23.027956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2153727 ] 00:30:25.975 [2024-11-28 08:29:23.122717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.975 [2024-11-28 08:29:23.176943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.236 Running I/O for 15 seconds... 00:30:28.118 9770.00 IOPS, 38.16 MiB/s [2024-11-28T07:29:26.353Z] 10461.50 IOPS, 40.87 MiB/s [2024-11-28T07:29:26.353Z] 08:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2153352 00:30:29.064 08:29:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:30:29.064 [2024-11-28 08:29:25.988776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:92136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.988816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.988835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.988845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.988857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:92152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.988867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.988878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.988888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.988897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:92168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.988905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.988915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 
08:29:25.988924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.988933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:92184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.988943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.988954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.988963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.988972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.988980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.988996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.989004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.989013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.989021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.989032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.989042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.989052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.989061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.989071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.989082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.989093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.989103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.989112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.989121] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.989132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.989142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.989152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:92272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.989166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.989177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.989185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.989195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:92288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.989202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.989212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.989220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.989230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:92304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.989240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.989249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:92312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.989257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.989266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:92320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.989274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.989284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.989292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.064 [2024-11-28 08:29:25.989301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:29.064 [2024-11-28 08:29:25.989309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:29.064 [2024-11-28 08:29:25.989319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:29.064 [2024-11-28 08:29:25.989326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:29.064 [2024-11-28 08:29:25.989337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:29.064 [2024-11-28 08:29:25.989344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... between 00:30:29.064 and 00:30:29.067 the same command/completion pair repeats for every I/O still queued on qid:1 -- WRITEs lba:92360 through lba:92616 (SGL DATA BLOCK OFFSET) and READs lba:91600 through lba:92120 (SGL TRANSPORT DATA BLOCK), in 8-block strides, each completed with ABORTED - SQ DELETION (00/08) ...]
00:30:29.067 [2024-11-28 08:29:25.991043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e51170 is same with the state(6) to be set
00:30:29.067 [2024-11-28 08:29:25.991052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:29.067 [2024-11-28 08:29:25.991058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:29.067 [2024-11-28 08:29:25.991065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92128 len:8 PRP1 0x0 PRP2 0x0
00:30:29.067 [2024-11-28 08:29:25.991073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:29.067 [2024-11-28 08:29:25.991146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:29.067 [2024-11-28 08:29:25.991253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the ASYNC EVENT REQUEST (0c) abort repeats for admin commands qid:0 cid:1, cid:2 and cid:3 ...]
00:30:29.067 [2024-11-28 08:29:25.991310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:29.067 [2024-11-28 08:29:25.994877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.067 [2024-11-28 08:29:25.994898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:29.067 [2024-11-28 08:29:25.995669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.067 [2024-11-28 08:29:25.995687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:29.067 [2024-11-28 08:29:25.995699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:29.067 [2024-11-28 08:29:25.995918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:29.067 [2024-11-28 08:29:25.996137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.067 [2024-11-28 08:29:25.996146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.067 [2024-11-28 08:29:25.996154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.067 [2024-11-28 08:29:25.996168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
[... the reset attempt is retried with the same connect() errno = 111 outcome at 08:29:26.009048 and 08:29:26.022875 ...]
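Note on the completions above: spdk_nvme_print_completion prints the NVMe status as an (SCT/SC) pair, so "(00/08)" is status code type 0x0 (generic command status) with status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion; dnr:0 marks the status as retryable. A minimal decoding sketch follows (illustrative only, not part of the test run; the bitfield layout assumes a little-endian GCC/Clang target):

/* Decode the 16-bit phase+status field of an NVMe completion, as printed
 * above in the form "(SCT/SC)". Layout per NVMe base spec, CQE dword 3:
 * P bit 0, SC bits 8:1, SCT bits 11:9, CRD bits 13:12, M bit 14, DNR bit 15. */
#include <stdio.h>

struct nvme_status {
    unsigned short p   : 1;  /* phase tag */
    unsigned short sc  : 8;  /* status code */
    unsigned short sct : 3;  /* status code type (0x0 = generic) */
    unsigned short crd : 2;  /* command retry delay */
    unsigned short m   : 1;  /* more */
    unsigned short dnr : 1;  /* do not retry */
};

static const char *generic_sc_name(unsigned int sc)
{
    switch (sc) {
    case 0x00: return "SUCCESS";
    case 0x08: return "ABORTED - SQ DELETION"; /* the status seen above */
    default:   return "OTHER";
    }
}

int main(void)
{
    /* The status every queued I/O received once qid:1 was torn down. */
    struct nvme_status st = { .p = 0, .sc = 0x08, .sct = 0x0, .m = 0, .dnr = 0 };

    printf("(%02x/%02x) => %s, p:%u m:%u dnr:%u\n",
           st.sct, st.sc, generic_sc_name(st.sc), st.p, st.m, st.dnr);
    return 0;
}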
00:30:29.067 [2024-11-28 08:29:26.036713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.068 [2024-11-28 08:29:26.037260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.068 [2024-11-28 08:29:26.037301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:29.068 [2024-11-28 08:29:26.037314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:29.068 [2024-11-28 08:29:26.037555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:29.068 [2024-11-28 08:29:26.037782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.068 [2024-11-28 08:29:26.037792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.068 [2024-11-28 08:29:26.037800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.068 [2024-11-28 08:29:26.037810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
[... the identical reset / connect() errno = 111 / reinitialization-failed cycle repeats about every 14 ms, starting at 08:29:26.050498, 26.064331, 26.078149, 26.092002, 26.105905, 26.119817, 26.133667, 26.147598, 26.161588, 26.175386, 26.189200, 26.203151, 26.216955, 26.230930, 26.244735, 26.258563, 26.272563, 26.286461 and 26.300253 ...]
00:30:29.070 [2024-11-28 08:29:26.314055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.070 [2024-11-28 08:29:26.314720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.070 [2024-11-28 08:29:26.314782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.070 [2024-11-28 08:29:26.314795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.070 [2024-11-28 08:29:26.315049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.070 [2024-11-28 08:29:26.315291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.070 [2024-11-28 08:29:26.315301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.070 [2024-11-28 08:29:26.315310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.070 [2024-11-28 08:29:26.315319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.070 [2024-11-28 08:29:26.328022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.070 [2024-11-28 08:29:26.328629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.070 [2024-11-28 08:29:26.328693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.070 [2024-11-28 08:29:26.328707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.070 [2024-11-28 08:29:26.328961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.070 [2024-11-28 08:29:26.329202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.070 [2024-11-28 08:29:26.329227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.070 [2024-11-28 08:29:26.329235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.070 [2024-11-28 08:29:26.329244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.070 [2024-11-28 08:29:26.341975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.070 [2024-11-28 08:29:26.342701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.070 [2024-11-28 08:29:26.342763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.070 [2024-11-28 08:29:26.342775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.070 [2024-11-28 08:29:26.343029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.070 [2024-11-28 08:29:26.343269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.070 [2024-11-28 08:29:26.343280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.070 [2024-11-28 08:29:26.343289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.070 [2024-11-28 08:29:26.343298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.332 [2024-11-28 08:29:26.355813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.332 [2024-11-28 08:29:26.356481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.332 [2024-11-28 08:29:26.356543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.332 [2024-11-28 08:29:26.356564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.332 [2024-11-28 08:29:26.356817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.332 [2024-11-28 08:29:26.357042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.332 [2024-11-28 08:29:26.357052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.332 [2024-11-28 08:29:26.357060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.332 [2024-11-28 08:29:26.357069] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.332 9278.00 IOPS, 36.24 MiB/s [2024-11-28T07:29:26.621Z] [2024-11-28 08:29:26.369800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.332 [2024-11-28 08:29:26.370476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.332 [2024-11-28 08:29:26.370538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.332 [2024-11-28 08:29:26.370551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.332 [2024-11-28 08:29:26.370804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.332 [2024-11-28 08:29:26.371030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.332 [2024-11-28 08:29:26.371039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.332 [2024-11-28 08:29:26.371048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.332 [2024-11-28 08:29:26.371057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.332 [2024-11-28 08:29:26.383788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.332 [2024-11-28 08:29:26.384364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.332 [2024-11-28 08:29:26.384427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.332 [2024-11-28 08:29:26.384441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.332 [2024-11-28 08:29:26.384697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.332 [2024-11-28 08:29:26.384922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.332 [2024-11-28 08:29:26.384933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.332 [2024-11-28 08:29:26.384941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.332 [2024-11-28 08:29:26.384950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
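The interleaved "9278.00 IOPS, 36.24 MiB/s" entry is the test's periodic performance ticker, printed between reconnect attempts. The two figures are consistent with a 4 KiB I/O size: 9278 IO/s x 4096 B = 38,002,688 B/s, and 38,002,688 / 2^20 = 36.24 MiB/s.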
00:30:29.332 [2024-11-28 08:29:26.397678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.332 [2024-11-28 08:29:26.398271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.332 [2024-11-28 08:29:26.398333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.332 [2024-11-28 08:29:26.398348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.332 [2024-11-28 08:29:26.398603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.332 [2024-11-28 08:29:26.398836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.332 [2024-11-28 08:29:26.398848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.332 [2024-11-28 08:29:26.398856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.332 [2024-11-28 08:29:26.398865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.332 [2024-11-28 08:29:26.411586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.332 [2024-11-28 08:29:26.412244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.332 [2024-11-28 08:29:26.412308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.332 [2024-11-28 08:29:26.412321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.332 [2024-11-28 08:29:26.412574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.332 [2024-11-28 08:29:26.412800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.332 [2024-11-28 08:29:26.412809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.332 [2024-11-28 08:29:26.412817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.332 [2024-11-28 08:29:26.412826] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.332 [2024-11-28 08:29:26.425554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.332 [2024-11-28 08:29:26.426219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.332 [2024-11-28 08:29:26.426281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.332 [2024-11-28 08:29:26.426294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.332 [2024-11-28 08:29:26.426547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.332 [2024-11-28 08:29:26.426773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.332 [2024-11-28 08:29:26.426784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.333 [2024-11-28 08:29:26.426793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.333 [2024-11-28 08:29:26.426802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.333 [2024-11-28 08:29:26.439546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.333 [2024-11-28 08:29:26.440151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.333 [2024-11-28 08:29:26.440230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.333 [2024-11-28 08:29:26.440243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.333 [2024-11-28 08:29:26.440497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.333 [2024-11-28 08:29:26.440723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.333 [2024-11-28 08:29:26.440733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.333 [2024-11-28 08:29:26.440755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.333 [2024-11-28 08:29:26.440765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.333 [2024-11-28 08:29:26.453477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.333 [2024-11-28 08:29:26.454030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.333 [2024-11-28 08:29:26.454057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.333 [2024-11-28 08:29:26.454066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.333 [2024-11-28 08:29:26.454299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.333 [2024-11-28 08:29:26.454521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.333 [2024-11-28 08:29:26.454531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.333 [2024-11-28 08:29:26.454539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.333 [2024-11-28 08:29:26.454548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.333 [2024-11-28 08:29:26.467263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.333 [2024-11-28 08:29:26.467830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.333 [2024-11-28 08:29:26.467855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.333 [2024-11-28 08:29:26.467864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.333 [2024-11-28 08:29:26.468084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.333 [2024-11-28 08:29:26.468315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.333 [2024-11-28 08:29:26.468326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.333 [2024-11-28 08:29:26.468334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.333 [2024-11-28 08:29:26.468342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.333 [2024-11-28 08:29:26.481233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.333 [2024-11-28 08:29:26.481757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.333 [2024-11-28 08:29:26.481780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.333 [2024-11-28 08:29:26.481788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.333 [2024-11-28 08:29:26.482007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.333 [2024-11-28 08:29:26.482237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.333 [2024-11-28 08:29:26.482247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.333 [2024-11-28 08:29:26.482255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.333 [2024-11-28 08:29:26.482263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.333 [2024-11-28 08:29:26.495039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.333 [2024-11-28 08:29:26.495577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.333 [2024-11-28 08:29:26.495602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.333 [2024-11-28 08:29:26.495610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.333 [2024-11-28 08:29:26.495830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.333 [2024-11-28 08:29:26.496050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.333 [2024-11-28 08:29:26.496061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.333 [2024-11-28 08:29:26.496069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.333 [2024-11-28 08:29:26.496077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.333 [2024-11-28 08:29:26.509025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.333 [2024-11-28 08:29:26.509615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.333 [2024-11-28 08:29:26.509640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.333 [2024-11-28 08:29:26.509648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.333 [2024-11-28 08:29:26.509867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.333 [2024-11-28 08:29:26.510087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.333 [2024-11-28 08:29:26.510105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.333 [2024-11-28 08:29:26.510113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.333 [2024-11-28 08:29:26.510121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.333 [2024-11-28 08:29:26.522837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.333 [2024-11-28 08:29:26.523492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.333 [2024-11-28 08:29:26.523554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.333 [2024-11-28 08:29:26.523567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.333 [2024-11-28 08:29:26.523821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.333 [2024-11-28 08:29:26.524047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.333 [2024-11-28 08:29:26.524056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.333 [2024-11-28 08:29:26.524065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.333 [2024-11-28 08:29:26.524074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.333 [2024-11-28 08:29:26.536805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.333 [2024-11-28 08:29:26.537481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.333 [2024-11-28 08:29:26.537544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.333 [2024-11-28 08:29:26.537565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.333 [2024-11-28 08:29:26.537819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.333 [2024-11-28 08:29:26.538045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.333 [2024-11-28 08:29:26.538054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.333 [2024-11-28 08:29:26.538062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.333 [2024-11-28 08:29:26.538071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.333 [2024-11-28 08:29:26.550784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.333 [2024-11-28 08:29:26.551475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.333 [2024-11-28 08:29:26.551537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.333 [2024-11-28 08:29:26.551550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.333 [2024-11-28 08:29:26.551804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.333 [2024-11-28 08:29:26.552029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.333 [2024-11-28 08:29:26.552039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.333 [2024-11-28 08:29:26.552047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.333 [2024-11-28 08:29:26.552056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.333 [2024-11-28 08:29:26.564589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.333 [2024-11-28 08:29:26.565188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.333 [2024-11-28 08:29:26.565217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.333 [2024-11-28 08:29:26.565226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.333 [2024-11-28 08:29:26.565449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.333 [2024-11-28 08:29:26.565669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.333 [2024-11-28 08:29:26.565680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.334 [2024-11-28 08:29:26.565687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.334 [2024-11-28 08:29:26.565695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.334 [2024-11-28 08:29:26.578392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.334 [2024-11-28 08:29:26.579055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.334 [2024-11-28 08:29:26.579116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.334 [2024-11-28 08:29:26.579130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.334 [2024-11-28 08:29:26.579398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.334 [2024-11-28 08:29:26.579633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.334 [2024-11-28 08:29:26.579643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.334 [2024-11-28 08:29:26.579651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.334 [2024-11-28 08:29:26.579660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.334 [2024-11-28 08:29:26.592212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.334 [2024-11-28 08:29:26.592802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.334 [2024-11-28 08:29:26.592830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.334 [2024-11-28 08:29:26.592839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.334 [2024-11-28 08:29:26.593062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.334 [2024-11-28 08:29:26.593295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.334 [2024-11-28 08:29:26.593308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.334 [2024-11-28 08:29:26.593315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.334 [2024-11-28 08:29:26.593323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.334 [2024-11-28 08:29:26.606054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.334 [2024-11-28 08:29:26.606748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.334 [2024-11-28 08:29:26.606812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.334 [2024-11-28 08:29:26.606826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.334 [2024-11-28 08:29:26.607080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.334 [2024-11-28 08:29:26.607319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.334 [2024-11-28 08:29:26.607330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.334 [2024-11-28 08:29:26.607338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.334 [2024-11-28 08:29:26.607347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.596 [2024-11-28 08:29:26.619853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.596 [2024-11-28 08:29:26.620520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-28 08:29:26.620584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.596 [2024-11-28 08:29:26.620597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.596 [2024-11-28 08:29:26.620853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.596 [2024-11-28 08:29:26.621082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.596 [2024-11-28 08:29:26.621092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.596 [2024-11-28 08:29:26.621108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.596 [2024-11-28 08:29:26.621117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.596 [2024-11-28 08:29:26.633849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.596 [2024-11-28 08:29:26.634549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-28 08:29:26.634613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.596 [2024-11-28 08:29:26.634626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.596 [2024-11-28 08:29:26.634880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.596 [2024-11-28 08:29:26.635106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.596 [2024-11-28 08:29:26.635117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.596 [2024-11-28 08:29:26.635125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.596 [2024-11-28 08:29:26.635134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.596 [2024-11-28 08:29:26.647710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.596 [2024-11-28 08:29:26.648301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-28 08:29:26.648364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.596 [2024-11-28 08:29:26.648376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.596 [2024-11-28 08:29:26.648630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.596 [2024-11-28 08:29:26.648855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.596 [2024-11-28 08:29:26.648866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.596 [2024-11-28 08:29:26.648874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.596 [2024-11-28 08:29:26.648883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.596 [2024-11-28 08:29:26.661632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.596 [2024-11-28 08:29:26.662274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-28 08:29:26.662339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.596 [2024-11-28 08:29:26.662352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.596 [2024-11-28 08:29:26.662606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.596 [2024-11-28 08:29:26.662832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.596 [2024-11-28 08:29:26.662841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.596 [2024-11-28 08:29:26.662850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.596 [2024-11-28 08:29:26.662859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.596 [2024-11-28 08:29:26.675588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.596 [2024-11-28 08:29:26.676143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-28 08:29:26.676182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.596 [2024-11-28 08:29:26.676193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.596 [2024-11-28 08:29:26.676416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.596 [2024-11-28 08:29:26.676637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.596 [2024-11-28 08:29:26.676648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.596 [2024-11-28 08:29:26.676656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.596 [2024-11-28 08:29:26.676664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.596 [2024-11-28 08:29:26.689546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.596 [2024-11-28 08:29:26.690220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-28 08:29:26.690284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.596 [2024-11-28 08:29:26.690299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.596 [2024-11-28 08:29:26.690553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.596 [2024-11-28 08:29:26.690779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.596 [2024-11-28 08:29:26.690792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.596 [2024-11-28 08:29:26.690801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.596 [2024-11-28 08:29:26.690810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.596 [2024-11-28 08:29:26.703348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.596 [2024-11-28 08:29:26.704022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-28 08:29:26.704084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.596 [2024-11-28 08:29:26.704098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.596 [2024-11-28 08:29:26.704361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.596 [2024-11-28 08:29:26.704589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.596 [2024-11-28 08:29:26.704599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.596 [2024-11-28 08:29:26.704608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.596 [2024-11-28 08:29:26.704617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.596 [2024-11-28 08:29:26.717184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.596 [2024-11-28 08:29:26.717773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-28 08:29:26.717802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.596 [2024-11-28 08:29:26.717819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.596 [2024-11-28 08:29:26.718041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.596 [2024-11-28 08:29:26.718272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.596 [2024-11-28 08:29:26.718283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.596 [2024-11-28 08:29:26.718291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.596 [2024-11-28 08:29:26.718300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.596 [2024-11-28 08:29:26.731003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.596 [2024-11-28 08:29:26.731650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.596 [2024-11-28 08:29:26.731708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.597 [2024-11-28 08:29:26.731720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.597 [2024-11-28 08:29:26.731970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.597 [2024-11-28 08:29:26.732206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.597 [2024-11-28 08:29:26.732216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.597 [2024-11-28 08:29:26.732225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.597 [2024-11-28 08:29:26.732233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.597 [2024-11-28 08:29:26.744974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.597 [2024-11-28 08:29:26.745591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.597 [2024-11-28 08:29:26.745620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.597 [2024-11-28 08:29:26.745629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.597 [2024-11-28 08:29:26.745849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.597 [2024-11-28 08:29:26.746070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.597 [2024-11-28 08:29:26.746079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.597 [2024-11-28 08:29:26.746087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.597 [2024-11-28 08:29:26.746096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.597 [2024-11-28 08:29:26.758856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.597 [2024-11-28 08:29:26.759431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.597 [2024-11-28 08:29:26.759459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.597 [2024-11-28 08:29:26.759467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.597 [2024-11-28 08:29:26.759689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.597 [2024-11-28 08:29:26.759919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.597 [2024-11-28 08:29:26.759929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.597 [2024-11-28 08:29:26.759937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.597 [2024-11-28 08:29:26.759945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.597 [2024-11-28 08:29:26.772708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.597 [2024-11-28 08:29:26.773278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.597 [2024-11-28 08:29:26.773304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.597 [2024-11-28 08:29:26.773313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.597 [2024-11-28 08:29:26.773533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.597 [2024-11-28 08:29:26.773753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.597 [2024-11-28 08:29:26.773762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.597 [2024-11-28 08:29:26.773771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.597 [2024-11-28 08:29:26.773778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.597 [2024-11-28 08:29:26.786522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.597 [2024-11-28 08:29:26.787087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.597 [2024-11-28 08:29:26.787110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.597 [2024-11-28 08:29:26.787118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.597 [2024-11-28 08:29:26.787347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.597 [2024-11-28 08:29:26.787567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.597 [2024-11-28 08:29:26.787577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.597 [2024-11-28 08:29:26.787585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.597 [2024-11-28 08:29:26.787593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.597 [2024-11-28 08:29:26.800312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.597 [2024-11-28 08:29:26.800877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.597 [2024-11-28 08:29:26.800900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.597 [2024-11-28 08:29:26.800909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.597 [2024-11-28 08:29:26.801128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.597 [2024-11-28 08:29:26.801358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.597 [2024-11-28 08:29:26.801376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.597 [2024-11-28 08:29:26.801391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.597 [2024-11-28 08:29:26.801399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.597 [2024-11-28 08:29:26.814140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.597 [2024-11-28 08:29:26.814718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.597 [2024-11-28 08:29:26.814742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.597 [2024-11-28 08:29:26.814750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.597 [2024-11-28 08:29:26.814970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.597 [2024-11-28 08:29:26.815199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.597 [2024-11-28 08:29:26.815211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.597 [2024-11-28 08:29:26.815219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.597 [2024-11-28 08:29:26.815226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.597 [2024-11-28 08:29:26.827948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.597 [2024-11-28 08:29:26.828495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.597 [2024-11-28 08:29:26.828519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.597 [2024-11-28 08:29:26.828529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.597 [2024-11-28 08:29:26.828749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.597 [2024-11-28 08:29:26.828968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.597 [2024-11-28 08:29:26.828979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.597 [2024-11-28 08:29:26.828987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.597 [2024-11-28 08:29:26.828994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.597 [2024-11-28 08:29:26.841754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.597 [2024-11-28 08:29:26.842280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.597 [2024-11-28 08:29:26.842304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.597 [2024-11-28 08:29:26.842312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.597 [2024-11-28 08:29:26.842532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.597 [2024-11-28 08:29:26.842752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.597 [2024-11-28 08:29:26.842761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.597 [2024-11-28 08:29:26.842769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.597 [2024-11-28 08:29:26.842777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.597 [2024-11-28 08:29:26.855723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.597 [2024-11-28 08:29:26.856218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.597 [2024-11-28 08:29:26.856243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.597 [2024-11-28 08:29:26.856251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.597 [2024-11-28 08:29:26.856470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.597 [2024-11-28 08:29:26.856700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.597 [2024-11-28 08:29:26.856711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.597 [2024-11-28 08:29:26.856718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.597 [2024-11-28 08:29:26.856726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.597 [2024-11-28 08:29:26.869682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.598 [2024-11-28 08:29:26.870258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.598 [2024-11-28 08:29:26.870302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.598 [2024-11-28 08:29:26.870311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.598 [2024-11-28 08:29:26.870549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.598 [2024-11-28 08:29:26.870771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.598 [2024-11-28 08:29:26.870780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.598 [2024-11-28 08:29:26.870788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.598 [2024-11-28 08:29:26.870796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.859 [2024-11-28 08:29:26.883557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.859 [2024-11-28 08:29:26.884086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.859 [2024-11-28 08:29:26.884113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.859 [2024-11-28 08:29:26.884121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.859 [2024-11-28 08:29:26.884353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.859 [2024-11-28 08:29:26.884575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.859 [2024-11-28 08:29:26.884585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.859 [2024-11-28 08:29:26.884593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.859 [2024-11-28 08:29:26.884601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.859 [2024-11-28 08:29:26.896205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.859 [2024-11-28 08:29:26.896713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.859 [2024-11-28 08:29:26.896734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.859 [2024-11-28 08:29:26.896746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.859 [2024-11-28 08:29:26.896899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.859 [2024-11-28 08:29:26.897052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.859 [2024-11-28 08:29:26.897061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.859 [2024-11-28 08:29:26.897067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.859 [2024-11-28 08:29:26.897073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.859 [2024-11-28 08:29:26.908947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.859 [2024-11-28 08:29:26.909430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.859 [2024-11-28 08:29:26.909449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.859 [2024-11-28 08:29:26.909455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.859 [2024-11-28 08:29:26.909606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.859 [2024-11-28 08:29:26.909759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.859 [2024-11-28 08:29:26.909765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.859 [2024-11-28 08:29:26.909771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.859 [2024-11-28 08:29:26.909777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.859 [2024-11-28 08:29:26.921657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.859 [2024-11-28 08:29:26.922131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.859 [2024-11-28 08:29:26.922148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.859 [2024-11-28 08:29:26.922153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.859 [2024-11-28 08:29:26.922313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.859 [2024-11-28 08:29:26.922464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.859 [2024-11-28 08:29:26.922471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.859 [2024-11-28 08:29:26.922476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.859 [2024-11-28 08:29:26.922482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.859 [2024-11-28 08:29:26.934341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.859 [2024-11-28 08:29:26.934765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.859 [2024-11-28 08:29:26.934782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.859 [2024-11-28 08:29:26.934787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.859 [2024-11-28 08:29:26.934938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.859 [2024-11-28 08:29:26.935094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.859 [2024-11-28 08:29:26.935100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.859 [2024-11-28 08:29:26.935106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.859 [2024-11-28 08:29:26.935111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.859 [2024-11-28 08:29:26.946993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.859 [2024-11-28 08:29:26.947433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.859 [2024-11-28 08:29:26.947449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.859 [2024-11-28 08:29:26.947454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.859 [2024-11-28 08:29:26.947605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.859 [2024-11-28 08:29:26.947755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.859 [2024-11-28 08:29:26.947762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.859 [2024-11-28 08:29:26.947767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.859 [2024-11-28 08:29:26.947772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.859 [2024-11-28 08:29:26.959701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.859 [2024-11-28 08:29:26.960171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.859 [2024-11-28 08:29:26.960188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.859 [2024-11-28 08:29:26.960193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.859 [2024-11-28 08:29:26.960344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.859 [2024-11-28 08:29:26.960494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.859 [2024-11-28 08:29:26.960500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.859 [2024-11-28 08:29:26.960505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.859 [2024-11-28 08:29:26.960510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.859 [2024-11-28 08:29:26.972352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.859 [2024-11-28 08:29:26.972810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.859 [2024-11-28 08:29:26.972823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.859 [2024-11-28 08:29:26.972828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.859 [2024-11-28 08:29:26.972978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.859 [2024-11-28 08:29:26.973129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.859 [2024-11-28 08:29:26.973135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.859 [2024-11-28 08:29:26.973144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.859 [2024-11-28 08:29:26.973149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.859 [2024-11-28 08:29:26.984992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.859 [2024-11-28 08:29:26.985482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.859 [2024-11-28 08:29:26.985496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.859 [2024-11-28 08:29:26.985501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.859 [2024-11-28 08:29:26.985651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.859 [2024-11-28 08:29:26.985802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.859 [2024-11-28 08:29:26.985808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.859 [2024-11-28 08:29:26.985812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.859 [2024-11-28 08:29:26.985817] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.859 [2024-11-28 08:29:26.997645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.859 [2024-11-28 08:29:26.998071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.859 [2024-11-28 08:29:26.998084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.859 [2024-11-28 08:29:26.998090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.859 [2024-11-28 08:29:26.998244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.859 [2024-11-28 08:29:26.998395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.859 [2024-11-28 08:29:26.998401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.859 [2024-11-28 08:29:26.998406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.859 [2024-11-28 08:29:26.998411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.859 [2024-11-28 08:29:27.010270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.859 [2024-11-28 08:29:27.010732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.859 [2024-11-28 08:29:27.010746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.859 [2024-11-28 08:29:27.010751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.859 [2024-11-28 08:29:27.010900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.859 [2024-11-28 08:29:27.011051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.859 [2024-11-28 08:29:27.011057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.859 [2024-11-28 08:29:27.011063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.859 [2024-11-28 08:29:27.011067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.859 [2024-11-28 08:29:27.022975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.859 [2024-11-28 08:29:27.023320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.859 [2024-11-28 08:29:27.023334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.859 [2024-11-28 08:29:27.023339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.859 [2024-11-28 08:29:27.023489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.859 [2024-11-28 08:29:27.023639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.859 [2024-11-28 08:29:27.023645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.859 [2024-11-28 08:29:27.023650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.859 [2024-11-28 08:29:27.023655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.859 [2024-11-28 08:29:27.035624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.859 [2024-11-28 08:29:27.036071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.859 [2024-11-28 08:29:27.036083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.859 [2024-11-28 08:29:27.036089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.859 [2024-11-28 08:29:27.036249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.859 [2024-11-28 08:29:27.036399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.859 [2024-11-28 08:29:27.036405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.859 [2024-11-28 08:29:27.036411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.860 [2024-11-28 08:29:27.036416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.860 [2024-11-28 08:29:27.048242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.860 [2024-11-28 08:29:27.048775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.860 [2024-11-28 08:29:27.048805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.860 [2024-11-28 08:29:27.048813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.860 [2024-11-28 08:29:27.048979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.860 [2024-11-28 08:29:27.049132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.860 [2024-11-28 08:29:27.049138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.860 [2024-11-28 08:29:27.049144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.860 [2024-11-28 08:29:27.049149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.860 [2024-11-28 08:29:27.060857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.860 [2024-11-28 08:29:27.061404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.860 [2024-11-28 08:29:27.061434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.860 [2024-11-28 08:29:27.061446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.860 [2024-11-28 08:29:27.061612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.860 [2024-11-28 08:29:27.061764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.860 [2024-11-28 08:29:27.061770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.860 [2024-11-28 08:29:27.061776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.860 [2024-11-28 08:29:27.061781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.860 [2024-11-28 08:29:27.073481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.860 [2024-11-28 08:29:27.074008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.860 [2024-11-28 08:29:27.074038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.860 [2024-11-28 08:29:27.074046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.860 [2024-11-28 08:29:27.074219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.860 [2024-11-28 08:29:27.074373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.860 [2024-11-28 08:29:27.074380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.860 [2024-11-28 08:29:27.074385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.860 [2024-11-28 08:29:27.074391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.860 [2024-11-28 08:29:27.086088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.860 [2024-11-28 08:29:27.086647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.860 [2024-11-28 08:29:27.086678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.860 [2024-11-28 08:29:27.086686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.860 [2024-11-28 08:29:27.086852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.860 [2024-11-28 08:29:27.087005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.860 [2024-11-28 08:29:27.087011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.860 [2024-11-28 08:29:27.087016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.860 [2024-11-28 08:29:27.087022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.860 [2024-11-28 08:29:27.098722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.860 [2024-11-28 08:29:27.099283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.860 [2024-11-28 08:29:27.099314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.860 [2024-11-28 08:29:27.099322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.860 [2024-11-28 08:29:27.099488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.860 [2024-11-28 08:29:27.099648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.860 [2024-11-28 08:29:27.099655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.860 [2024-11-28 08:29:27.099660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.860 [2024-11-28 08:29:27.099665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.860 [2024-11-28 08:29:27.111346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.860 [2024-11-28 08:29:27.111803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.860 [2024-11-28 08:29:27.111818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.860 [2024-11-28 08:29:27.111824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.860 [2024-11-28 08:29:27.111973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.860 [2024-11-28 08:29:27.112123] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.860 [2024-11-28 08:29:27.112130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.860 [2024-11-28 08:29:27.112135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.860 [2024-11-28 08:29:27.112139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.860 [2024-11-28 08:29:27.123960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.860 [2024-11-28 08:29:27.124548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.860 [2024-11-28 08:29:27.124578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.860 [2024-11-28 08:29:27.124587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.860 [2024-11-28 08:29:27.124752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.860 [2024-11-28 08:29:27.124905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.860 [2024-11-28 08:29:27.124911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.860 [2024-11-28 08:29:27.124917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.860 [2024-11-28 08:29:27.124922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.860 [2024-11-28 08:29:27.136621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.860 [2024-11-28 08:29:27.137078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.860 [2024-11-28 08:29:27.137093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:29.860 [2024-11-28 08:29:27.137098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:29.860 [2024-11-28 08:29:27.137253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:29.860 [2024-11-28 08:29:27.137403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.860 [2024-11-28 08:29:27.137409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.860 [2024-11-28 08:29:27.137418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.860 [2024-11-28 08:29:27.137423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.122 [2024-11-28 08:29:27.149253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.122 [2024-11-28 08:29:27.149659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.122 [2024-11-28 08:29:27.149671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.122 [2024-11-28 08:29:27.149677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.122 [2024-11-28 08:29:27.149827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.122 [2024-11-28 08:29:27.149976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.122 [2024-11-28 08:29:27.149982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.122 [2024-11-28 08:29:27.149987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.123 [2024-11-28 08:29:27.149992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.123 [2024-11-28 08:29:27.161958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.123 [2024-11-28 08:29:27.162302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.123 [2024-11-28 08:29:27.162316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.123 [2024-11-28 08:29:27.162321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.123 [2024-11-28 08:29:27.162471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.123 [2024-11-28 08:29:27.162620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.123 [2024-11-28 08:29:27.162626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.123 [2024-11-28 08:29:27.162631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.123 [2024-11-28 08:29:27.162635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.123 [2024-11-28 08:29:27.174607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.123 [2024-11-28 08:29:27.175053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.123 [2024-11-28 08:29:27.175065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.123 [2024-11-28 08:29:27.175070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.123 [2024-11-28 08:29:27.175223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.123 [2024-11-28 08:29:27.175374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.123 [2024-11-28 08:29:27.175380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.123 [2024-11-28 08:29:27.175385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.123 [2024-11-28 08:29:27.175389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.123 [2024-11-28 08:29:27.187220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.123 [2024-11-28 08:29:27.187670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.123 [2024-11-28 08:29:27.187682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.123 [2024-11-28 08:29:27.187687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.123 [2024-11-28 08:29:27.187838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.123 [2024-11-28 08:29:27.187988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.123 [2024-11-28 08:29:27.187994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.123 [2024-11-28 08:29:27.187999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.123 [2024-11-28 08:29:27.188003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.123 [2024-11-28 08:29:27.199821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.123 [2024-11-28 08:29:27.200270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.123 [2024-11-28 08:29:27.200283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.123 [2024-11-28 08:29:27.200288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.123 [2024-11-28 08:29:27.200437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.123 [2024-11-28 08:29:27.200587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.123 [2024-11-28 08:29:27.200593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.123 [2024-11-28 08:29:27.200598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.123 [2024-11-28 08:29:27.200602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.123 [2024-11-28 08:29:27.212423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.123 [2024-11-28 08:29:27.212959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.123 [2024-11-28 08:29:27.212989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.123 [2024-11-28 08:29:27.212998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.123 [2024-11-28 08:29:27.213171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.123 [2024-11-28 08:29:27.213324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.123 [2024-11-28 08:29:27.213331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.123 [2024-11-28 08:29:27.213336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.123 [2024-11-28 08:29:27.213341] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.123 [2024-11-28 08:29:27.225035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.123 [2024-11-28 08:29:27.225460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.123 [2024-11-28 08:29:27.225475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.123 [2024-11-28 08:29:27.225484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.123 [2024-11-28 08:29:27.225635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.123 [2024-11-28 08:29:27.225785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.123 [2024-11-28 08:29:27.225790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.123 [2024-11-28 08:29:27.225795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.123 [2024-11-28 08:29:27.225800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.123 [2024-11-28 08:29:27.237640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.123 [2024-11-28 08:29:27.238051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.123 [2024-11-28 08:29:27.238064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.123 [2024-11-28 08:29:27.238069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.123 [2024-11-28 08:29:27.238223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.123 [2024-11-28 08:29:27.238374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.123 [2024-11-28 08:29:27.238380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.123 [2024-11-28 08:29:27.238385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.123 [2024-11-28 08:29:27.238390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.123 [2024-11-28 08:29:27.250354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.123 [2024-11-28 08:29:27.250773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.123 [2024-11-28 08:29:27.250785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.123 [2024-11-28 08:29:27.250791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.123 [2024-11-28 08:29:27.250940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.123 [2024-11-28 08:29:27.251090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.123 [2024-11-28 08:29:27.251096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.123 [2024-11-28 08:29:27.251101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.123 [2024-11-28 08:29:27.251106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.123 [2024-11-28 08:29:27.262941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.123 [2024-11-28 08:29:27.263281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.123 [2024-11-28 08:29:27.263296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.123 [2024-11-28 08:29:27.263301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.123 [2024-11-28 08:29:27.263451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.123 [2024-11-28 08:29:27.263604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.123 [2024-11-28 08:29:27.263610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.123 [2024-11-28 08:29:27.263615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.123 [2024-11-28 08:29:27.263619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.123 [2024-11-28 08:29:27.275590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.123 [2024-11-28 08:29:27.276042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.123 [2024-11-28 08:29:27.276055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.123 [2024-11-28 08:29:27.276060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.123 [2024-11-28 08:29:27.276215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.123 [2024-11-28 08:29:27.276365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.124 [2024-11-28 08:29:27.276371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.124 [2024-11-28 08:29:27.276376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.124 [2024-11-28 08:29:27.276380] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.124 [2024-11-28 08:29:27.288203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.124 [2024-11-28 08:29:27.288730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.124 [2024-11-28 08:29:27.288760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.124 [2024-11-28 08:29:27.288769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.124 [2024-11-28 08:29:27.288934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.124 [2024-11-28 08:29:27.289087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.124 [2024-11-28 08:29:27.289093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.124 [2024-11-28 08:29:27.289098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.124 [2024-11-28 08:29:27.289104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.124 [2024-11-28 08:29:27.300802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.124 [2024-11-28 08:29:27.301261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.124 [2024-11-28 08:29:27.301276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.124 [2024-11-28 08:29:27.301282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.124 [2024-11-28 08:29:27.301432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.124 [2024-11-28 08:29:27.301582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.124 [2024-11-28 08:29:27.301588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.124 [2024-11-28 08:29:27.301597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.124 [2024-11-28 08:29:27.301602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.124 [2024-11-28 08:29:27.313430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.124 [2024-11-28 08:29:27.313848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.124 [2024-11-28 08:29:27.313879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.124 [2024-11-28 08:29:27.313888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.124 [2024-11-28 08:29:27.314053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.124 [2024-11-28 08:29:27.314213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.124 [2024-11-28 08:29:27.314220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.124 [2024-11-28 08:29:27.314226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.124 [2024-11-28 08:29:27.314231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.124 [2024-11-28 08:29:27.326069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.124 [2024-11-28 08:29:27.326562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.124 [2024-11-28 08:29:27.326592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.124 [2024-11-28 08:29:27.326601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.124 [2024-11-28 08:29:27.326766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.124 [2024-11-28 08:29:27.326919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.124 [2024-11-28 08:29:27.326926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.124 [2024-11-28 08:29:27.326932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.124 [2024-11-28 08:29:27.326937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.124 [2024-11-28 08:29:27.338789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.124 [2024-11-28 08:29:27.339250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.124 [2024-11-28 08:29:27.339266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.124 [2024-11-28 08:29:27.339272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.124 [2024-11-28 08:29:27.339422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.124 [2024-11-28 08:29:27.339572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.124 [2024-11-28 08:29:27.339578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.124 [2024-11-28 08:29:27.339583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.124 [2024-11-28 08:29:27.339588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.124 [2024-11-28 08:29:27.351425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.124 [2024-11-28 08:29:27.351897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.124 [2024-11-28 08:29:27.351910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.124 [2024-11-28 08:29:27.351915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.124 [2024-11-28 08:29:27.352065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.124 [2024-11-28 08:29:27.352220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.124 [2024-11-28 08:29:27.352226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.124 [2024-11-28 08:29:27.352231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.124 [2024-11-28 08:29:27.352235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.124 6958.50 IOPS, 27.18 MiB/s [2024-11-28T07:29:27.413Z] [2024-11-28 08:29:27.365204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.124 [2024-11-28 08:29:27.365742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.124 [2024-11-28 08:29:27.365772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.124 [2024-11-28 08:29:27.365781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.124 [2024-11-28 08:29:27.365946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.124 [2024-11-28 08:29:27.366099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.124 [2024-11-28 08:29:27.366106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.124 [2024-11-28 08:29:27.366111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.124 [2024-11-28 08:29:27.366117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.124 [2024-11-28 08:29:27.377808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.124 [2024-11-28 08:29:27.378383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.124 [2024-11-28 08:29:27.378413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.124 [2024-11-28 08:29:27.378422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.124 [2024-11-28 08:29:27.378587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.124 [2024-11-28 08:29:27.378739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.124 [2024-11-28 08:29:27.378745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.124 [2024-11-28 08:29:27.378751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.124 [2024-11-28 08:29:27.378757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.124 [2024-11-28 08:29:27.390432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.124 [2024-11-28 08:29:27.390847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.124 [2024-11-28 08:29:27.390861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.124 [2024-11-28 08:29:27.390870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.124 [2024-11-28 08:29:27.391020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.124 [2024-11-28 08:29:27.391176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.124 [2024-11-28 08:29:27.391182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.124 [2024-11-28 08:29:27.391187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.124 [2024-11-28 08:29:27.391192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.124 [2024-11-28 08:29:27.403142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.124 [2024-11-28 08:29:27.403717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.124 [2024-11-28 08:29:27.403747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.124 [2024-11-28 08:29:27.403756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.125 [2024-11-28 08:29:27.403921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.125 [2024-11-28 08:29:27.404074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.125 [2024-11-28 08:29:27.404080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.125 [2024-11-28 08:29:27.404085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.125 [2024-11-28 08:29:27.404091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.388 [2024-11-28 08:29:27.415785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.388 [2024-11-28 08:29:27.416306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.388 [2024-11-28 08:29:27.416336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.388 [2024-11-28 08:29:27.416345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.388 [2024-11-28 08:29:27.416510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.388 [2024-11-28 08:29:27.416663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.388 [2024-11-28 08:29:27.416669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.388 [2024-11-28 08:29:27.416675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.388 [2024-11-28 08:29:27.416680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.388 [2024-11-28 08:29:27.428498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.388 [2024-11-28 08:29:27.429044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.388 [2024-11-28 08:29:27.429074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.388 [2024-11-28 08:29:27.429083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.388 [2024-11-28 08:29:27.429255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.388 [2024-11-28 08:29:27.429413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.388 [2024-11-28 08:29:27.429419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.388 [2024-11-28 08:29:27.429425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.388 [2024-11-28 08:29:27.429430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.388 [2024-11-28 08:29:27.441113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.388 [2024-11-28 08:29:27.441646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.388 [2024-11-28 08:29:27.441676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.388 [2024-11-28 08:29:27.441685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.388 [2024-11-28 08:29:27.441850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.388 [2024-11-28 08:29:27.442003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.388 [2024-11-28 08:29:27.442010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.388 [2024-11-28 08:29:27.442015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.388 [2024-11-28 08:29:27.442021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.388 [2024-11-28 08:29:27.453707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.388 [2024-11-28 08:29:27.454167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.388 [2024-11-28 08:29:27.454182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.388 [2024-11-28 08:29:27.454188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.388 [2024-11-28 08:29:27.454338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.388 [2024-11-28 08:29:27.454488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.388 [2024-11-28 08:29:27.454494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.388 [2024-11-28 08:29:27.454499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.388 [2024-11-28 08:29:27.454504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.388 [2024-11-28 08:29:27.466323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.388 [2024-11-28 08:29:27.466756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.388 [2024-11-28 08:29:27.466769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.388 [2024-11-28 08:29:27.466775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.388 [2024-11-28 08:29:27.466925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.388 [2024-11-28 08:29:27.467074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.388 [2024-11-28 08:29:27.467080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.388 [2024-11-28 08:29:27.467088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.388 [2024-11-28 08:29:27.467093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.388 [2024-11-28 08:29:27.478909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.388 [2024-11-28 08:29:27.479512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.388 [2024-11-28 08:29:27.479542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.388 [2024-11-28 08:29:27.479551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.388 [2024-11-28 08:29:27.479717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.388 [2024-11-28 08:29:27.479869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.388 [2024-11-28 08:29:27.479876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.388 [2024-11-28 08:29:27.479881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.388 [2024-11-28 08:29:27.479887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.388 [2024-11-28 08:29:27.491569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.388 [2024-11-28 08:29:27.492123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.388 [2024-11-28 08:29:27.492153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.388 [2024-11-28 08:29:27.492169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.388 [2024-11-28 08:29:27.492335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.388 [2024-11-28 08:29:27.492487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.389 [2024-11-28 08:29:27.492494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.389 [2024-11-28 08:29:27.492499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.389 [2024-11-28 08:29:27.492505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.389 [2024-11-28 08:29:27.504179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.389 [2024-11-28 08:29:27.504775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.389 [2024-11-28 08:29:27.504805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.389 [2024-11-28 08:29:27.504815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.389 [2024-11-28 08:29:27.504980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.389 [2024-11-28 08:29:27.505133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.389 [2024-11-28 08:29:27.505140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.389 [2024-11-28 08:29:27.505146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.389 [2024-11-28 08:29:27.505152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.389 [2024-11-28 08:29:27.516846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.389 [2024-11-28 08:29:27.517261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.389 [2024-11-28 08:29:27.517277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.389 [2024-11-28 08:29:27.517283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.389 [2024-11-28 08:29:27.517433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.389 [2024-11-28 08:29:27.517583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.389 [2024-11-28 08:29:27.517589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.389 [2024-11-28 08:29:27.517594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.389 [2024-11-28 08:29:27.517599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.389 [2024-11-28 08:29:27.529570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.389 [2024-11-28 08:29:27.530021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.389 [2024-11-28 08:29:27.530034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.389 [2024-11-28 08:29:27.530039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.389 [2024-11-28 08:29:27.530193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.389 [2024-11-28 08:29:27.530343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.389 [2024-11-28 08:29:27.530349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.389 [2024-11-28 08:29:27.530354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.389 [2024-11-28 08:29:27.530359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.389 [2024-11-28 08:29:27.542201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.389 [2024-11-28 08:29:27.542665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.389 [2024-11-28 08:29:27.542678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.389 [2024-11-28 08:29:27.542683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.389 [2024-11-28 08:29:27.542834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.389 [2024-11-28 08:29:27.542984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.389 [2024-11-28 08:29:27.542990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.389 [2024-11-28 08:29:27.542994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.389 [2024-11-28 08:29:27.542999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.389 [2024-11-28 08:29:27.554828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.389 [2024-11-28 08:29:27.555270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.389 [2024-11-28 08:29:27.555300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.389 [2024-11-28 08:29:27.555313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.389 [2024-11-28 08:29:27.555481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.389 [2024-11-28 08:29:27.555633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.389 [2024-11-28 08:29:27.555640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.389 [2024-11-28 08:29:27.555646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.389 [2024-11-28 08:29:27.555651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.389 [2024-11-28 08:29:27.567477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.389 [2024-11-28 08:29:27.568008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.389 [2024-11-28 08:29:27.568038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.389 [2024-11-28 08:29:27.568047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.389 [2024-11-28 08:29:27.568217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.389 [2024-11-28 08:29:27.568371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.389 [2024-11-28 08:29:27.568377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.389 [2024-11-28 08:29:27.568383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.389 [2024-11-28 08:29:27.568389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.389 [2024-11-28 08:29:27.580256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.389 [2024-11-28 08:29:27.580813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.389 [2024-11-28 08:29:27.580843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.389 [2024-11-28 08:29:27.580851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.389 [2024-11-28 08:29:27.581017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.389 [2024-11-28 08:29:27.581177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.389 [2024-11-28 08:29:27.581184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.389 [2024-11-28 08:29:27.581189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.389 [2024-11-28 08:29:27.581195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.389 [2024-11-28 08:29:27.592878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.389 [2024-11-28 08:29:27.593458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.389 [2024-11-28 08:29:27.593488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.389 [2024-11-28 08:29:27.593497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.389 [2024-11-28 08:29:27.593662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.389 [2024-11-28 08:29:27.593819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.389 [2024-11-28 08:29:27.593825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.389 [2024-11-28 08:29:27.593831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.389 [2024-11-28 08:29:27.593836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.389 [2024-11-28 08:29:27.605515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.389 [2024-11-28 08:29:27.605953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.389 [2024-11-28 08:29:27.605983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.389 [2024-11-28 08:29:27.605991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.389 [2024-11-28 08:29:27.606157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.389 [2024-11-28 08:29:27.606318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.389 [2024-11-28 08:29:27.606325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.389 [2024-11-28 08:29:27.606330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.389 [2024-11-28 08:29:27.606336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.389 [2024-11-28 08:29:27.618144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.389 [2024-11-28 08:29:27.618670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.389 [2024-11-28 08:29:27.618701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.389 [2024-11-28 08:29:27.618709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.390 [2024-11-28 08:29:27.618875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.390 [2024-11-28 08:29:27.619027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.390 [2024-11-28 08:29:27.619034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.390 [2024-11-28 08:29:27.619039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.390 [2024-11-28 08:29:27.619045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.390 [2024-11-28 08:29:27.630871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.390 [2024-11-28 08:29:27.631357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.390 [2024-11-28 08:29:27.631387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.390 [2024-11-28 08:29:27.631396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.390 [2024-11-28 08:29:27.631561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.390 [2024-11-28 08:29:27.631714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.390 [2024-11-28 08:29:27.631720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.390 [2024-11-28 08:29:27.631729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.390 [2024-11-28 08:29:27.631734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.390 [2024-11-28 08:29:27.643594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.390 [2024-11-28 08:29:27.644148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.390 [2024-11-28 08:29:27.644184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.390 [2024-11-28 08:29:27.644192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.390 [2024-11-28 08:29:27.644357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.390 [2024-11-28 08:29:27.644510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.390 [2024-11-28 08:29:27.644516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.390 [2024-11-28 08:29:27.644522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.390 [2024-11-28 08:29:27.644527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.390 [2024-11-28 08:29:27.656206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.390 [2024-11-28 08:29:27.656763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.390 [2024-11-28 08:29:27.656793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.390 [2024-11-28 08:29:27.656802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.390 [2024-11-28 08:29:27.656967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.390 [2024-11-28 08:29:27.657120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.390 [2024-11-28 08:29:27.657126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.390 [2024-11-28 08:29:27.657132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.390 [2024-11-28 08:29:27.657137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.390 [2024-11-28 08:29:27.668829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.390 [2024-11-28 08:29:27.669242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.390 [2024-11-28 08:29:27.669272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.390 [2024-11-28 08:29:27.669280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.390 [2024-11-28 08:29:27.669446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.390 [2024-11-28 08:29:27.669598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.390 [2024-11-28 08:29:27.669604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.390 [2024-11-28 08:29:27.669610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.390 [2024-11-28 08:29:27.669615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.652 [2024-11-28 08:29:27.681457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.652 [2024-11-28 08:29:27.682004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.652 [2024-11-28 08:29:27.682034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.652 [2024-11-28 08:29:27.682043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.652 [2024-11-28 08:29:27.682216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.652 [2024-11-28 08:29:27.682369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.652 [2024-11-28 08:29:27.682375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.652 [2024-11-28 08:29:27.682381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.652 [2024-11-28 08:29:27.682387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.652 [2024-11-28 08:29:27.694054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.652 [2024-11-28 08:29:27.694606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.652 [2024-11-28 08:29:27.694637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.653 [2024-11-28 08:29:27.694645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.653 [2024-11-28 08:29:27.694811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.653 [2024-11-28 08:29:27.694964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.653 [2024-11-28 08:29:27.694970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.653 [2024-11-28 08:29:27.694975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.653 [2024-11-28 08:29:27.694981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.653 [2024-11-28 08:29:27.706667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.653 [2024-11-28 08:29:27.707218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.653 [2024-11-28 08:29:27.707249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.653 [2024-11-28 08:29:27.707258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.653 [2024-11-28 08:29:27.707425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.653 [2024-11-28 08:29:27.707578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.653 [2024-11-28 08:29:27.707584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.653 [2024-11-28 08:29:27.707590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.653 [2024-11-28 08:29:27.707595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.653 [2024-11-28 08:29:27.719287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.653 [2024-11-28 08:29:27.719746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.653 [2024-11-28 08:29:27.719761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.653 [2024-11-28 08:29:27.719774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.653 [2024-11-28 08:29:27.719925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.653 [2024-11-28 08:29:27.720075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.653 [2024-11-28 08:29:27.720081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.653 [2024-11-28 08:29:27.720086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.653 [2024-11-28 08:29:27.720091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.653 [2024-11-28 08:29:27.731906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.653 [2024-11-28 08:29:27.732472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.653 [2024-11-28 08:29:27.732502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.653 [2024-11-28 08:29:27.732511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.653 [2024-11-28 08:29:27.732676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.653 [2024-11-28 08:29:27.732829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.653 [2024-11-28 08:29:27.732835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.653 [2024-11-28 08:29:27.732840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.653 [2024-11-28 08:29:27.732846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.653 [2024-11-28 08:29:27.744541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.653 [2024-11-28 08:29:27.745086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.653 [2024-11-28 08:29:27.745116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.653 [2024-11-28 08:29:27.745125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.653 [2024-11-28 08:29:27.745302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.653 [2024-11-28 08:29:27.745455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.653 [2024-11-28 08:29:27.745462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.653 [2024-11-28 08:29:27.745467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.653 [2024-11-28 08:29:27.745473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.653 [2024-11-28 08:29:27.757143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.653 [2024-11-28 08:29:27.757712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.653 [2024-11-28 08:29:27.757742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.653 [2024-11-28 08:29:27.757751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.653 [2024-11-28 08:29:27.757918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.653 [2024-11-28 08:29:27.758075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.653 [2024-11-28 08:29:27.758081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.653 [2024-11-28 08:29:27.758087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.653 [2024-11-28 08:29:27.758093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.653 [2024-11-28 08:29:27.769786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.653 [2024-11-28 08:29:27.770287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.653 [2024-11-28 08:29:27.770318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.653 [2024-11-28 08:29:27.770327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.653 [2024-11-28 08:29:27.770495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.653 [2024-11-28 08:29:27.770647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.653 [2024-11-28 08:29:27.770653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.653 [2024-11-28 08:29:27.770659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.653 [2024-11-28 08:29:27.770664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.653 [2024-11-28 08:29:27.782488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.653 [2024-11-28 08:29:27.782948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.653 [2024-11-28 08:29:27.782963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.653 [2024-11-28 08:29:27.782968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.653 [2024-11-28 08:29:27.783118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.653 [2024-11-28 08:29:27.783273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.653 [2024-11-28 08:29:27.783280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.653 [2024-11-28 08:29:27.783284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.653 [2024-11-28 08:29:27.783290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.653 [2024-11-28 08:29:27.795093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.653 [2024-11-28 08:29:27.795553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.653 [2024-11-28 08:29:27.795566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.653 [2024-11-28 08:29:27.795572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.653 [2024-11-28 08:29:27.795721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.653 [2024-11-28 08:29:27.795871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.653 [2024-11-28 08:29:27.795877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.653 [2024-11-28 08:29:27.795886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.653 [2024-11-28 08:29:27.795891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.653 [2024-11-28 08:29:27.807711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.653 [2024-11-28 08:29:27.808156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.653 [2024-11-28 08:29:27.808173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.653 [2024-11-28 08:29:27.808179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.653 [2024-11-28 08:29:27.808328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.653 [2024-11-28 08:29:27.808478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.653 [2024-11-28 08:29:27.808484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.653 [2024-11-28 08:29:27.808489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.653 [2024-11-28 08:29:27.808494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.653 [2024-11-28 08:29:27.820303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.654 [2024-11-28 08:29:27.820829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.654 [2024-11-28 08:29:27.820859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.654 [2024-11-28 08:29:27.820868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.654 [2024-11-28 08:29:27.821033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.654 [2024-11-28 08:29:27.821195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.654 [2024-11-28 08:29:27.821203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.654 [2024-11-28 08:29:27.821208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.654 [2024-11-28 08:29:27.821214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.654 [2024-11-28 08:29:27.833022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.654 [2024-11-28 08:29:27.833585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.654 [2024-11-28 08:29:27.833615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.654 [2024-11-28 08:29:27.833624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.654 [2024-11-28 08:29:27.833789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.654 [2024-11-28 08:29:27.833942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.654 [2024-11-28 08:29:27.833948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.654 [2024-11-28 08:29:27.833953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.654 [2024-11-28 08:29:27.833959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.654 [2024-11-28 08:29:27.845655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.654 [2024-11-28 08:29:27.846114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.654 [2024-11-28 08:29:27.846129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.654 [2024-11-28 08:29:27.846134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.654 [2024-11-28 08:29:27.846289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.654 [2024-11-28 08:29:27.846440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.654 [2024-11-28 08:29:27.846445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.654 [2024-11-28 08:29:27.846450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.654 [2024-11-28 08:29:27.846455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.654 [2024-11-28 08:29:27.858277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.654 [2024-11-28 08:29:27.858771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.654 [2024-11-28 08:29:27.858800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.654 [2024-11-28 08:29:27.858809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.654 [2024-11-28 08:29:27.858974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.654 [2024-11-28 08:29:27.859127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.654 [2024-11-28 08:29:27.859133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.654 [2024-11-28 08:29:27.859138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.654 [2024-11-28 08:29:27.859144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.654 [2024-11-28 08:29:27.870982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.654 [2024-11-28 08:29:27.871543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.654 [2024-11-28 08:29:27.871573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.654 [2024-11-28 08:29:27.871582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.654 [2024-11-28 08:29:27.871748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.654 [2024-11-28 08:29:27.871900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.654 [2024-11-28 08:29:27.871907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.654 [2024-11-28 08:29:27.871912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.654 [2024-11-28 08:29:27.871918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.654 [2024-11-28 08:29:27.883599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.654 [2024-11-28 08:29:27.884012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.654 [2024-11-28 08:29:27.884026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.654 [2024-11-28 08:29:27.884035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.654 [2024-11-28 08:29:27.884193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.654 [2024-11-28 08:29:27.884344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.654 [2024-11-28 08:29:27.884350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.654 [2024-11-28 08:29:27.884355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.654 [2024-11-28 08:29:27.884360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.654 [2024-11-28 08:29:27.896313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.654 [2024-11-28 08:29:27.896839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.654 [2024-11-28 08:29:27.896869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:30.654 [2024-11-28 08:29:27.896878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:30.654 [2024-11-28 08:29:27.897043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:30.654 [2024-11-28 08:29:27.897204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.654 [2024-11-28 08:29:27.897211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.654 [2024-11-28 08:29:27.897216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.654 [2024-11-28 08:29:27.897222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.654 [2024-11-28 08:29:27.909038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.654 [2024-11-28 08:29:27.909516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.654 [2024-11-28 08:29:27.909531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.654 [2024-11-28 08:29:27.909537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.654 [2024-11-28 08:29:27.909687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.654 [2024-11-28 08:29:27.909837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.654 [2024-11-28 08:29:27.909843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.654 [2024-11-28 08:29:27.909848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.654 [2024-11-28 08:29:27.909852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.654 [2024-11-28 08:29:27.921665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.654 [2024-11-28 08:29:27.922111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.654 [2024-11-28 08:29:27.922123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.654 [2024-11-28 08:29:27.922128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.654 [2024-11-28 08:29:27.922284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.654 [2024-11-28 08:29:27.922437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.654 [2024-11-28 08:29:27.922443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.654 [2024-11-28 08:29:27.922448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.654 [2024-11-28 08:29:27.922453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.654 [2024-11-28 08:29:27.934264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.654 [2024-11-28 08:29:27.934803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.654 [2024-11-28 08:29:27.934833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.654 [2024-11-28 08:29:27.934842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.654 [2024-11-28 08:29:27.935007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.654 [2024-11-28 08:29:27.935168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.654 [2024-11-28 08:29:27.935175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.654 [2024-11-28 08:29:27.935180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.654 [2024-11-28 08:29:27.935186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.918 [2024-11-28 08:29:27.946870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.918 [2024-11-28 08:29:27.947280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.918 [2024-11-28 08:29:27.947309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.918 [2024-11-28 08:29:27.947318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.918 [2024-11-28 08:29:27.947485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.918 [2024-11-28 08:29:27.947638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.918 [2024-11-28 08:29:27.947644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.918 [2024-11-28 08:29:27.947650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.918 [2024-11-28 08:29:27.947656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.918 [2024-11-28 08:29:27.959484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.918 [2024-11-28 08:29:27.960034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.918 [2024-11-28 08:29:27.960064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.918 [2024-11-28 08:29:27.960073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.918 [2024-11-28 08:29:27.960256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.918 [2024-11-28 08:29:27.960410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.918 [2024-11-28 08:29:27.960417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.918 [2024-11-28 08:29:27.960426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.918 [2024-11-28 08:29:27.960432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.918 [2024-11-28 08:29:27.972107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.918 [2024-11-28 08:29:27.972662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.918 [2024-11-28 08:29:27.972692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.918 [2024-11-28 08:29:27.972701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.918 [2024-11-28 08:29:27.972865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.918 [2024-11-28 08:29:27.973018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.918 [2024-11-28 08:29:27.973024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.918 [2024-11-28 08:29:27.973030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.918 [2024-11-28 08:29:27.973035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.918 [2024-11-28 08:29:27.984719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.918 [2024-11-28 08:29:27.985242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.918 [2024-11-28 08:29:27.985272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.918 [2024-11-28 08:29:27.985281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.918 [2024-11-28 08:29:27.985446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.918 [2024-11-28 08:29:27.985599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.918 [2024-11-28 08:29:27.985605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.918 [2024-11-28 08:29:27.985610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.918 [2024-11-28 08:29:27.985616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.918 [2024-11-28 08:29:27.997446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.918 [2024-11-28 08:29:27.997961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.918 [2024-11-28 08:29:27.997991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.918 [2024-11-28 08:29:27.998000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.918 [2024-11-28 08:29:27.998174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.918 [2024-11-28 08:29:27.998327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.918 [2024-11-28 08:29:27.998333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.918 [2024-11-28 08:29:27.998338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.918 [2024-11-28 08:29:27.998344] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.918 [2024-11-28 08:29:28.010172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.918 [2024-11-28 08:29:28.010746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.918 [2024-11-28 08:29:28.010776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.918 [2024-11-28 08:29:28.010785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.918 [2024-11-28 08:29:28.010951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.918 [2024-11-28 08:29:28.011104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.918 [2024-11-28 08:29:28.011110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.918 [2024-11-28 08:29:28.011115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.918 [2024-11-28 08:29:28.011121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.918 [2024-11-28 08:29:28.022800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.918 [2024-11-28 08:29:28.023150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.918 [2024-11-28 08:29:28.023169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.918 [2024-11-28 08:29:28.023175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.918 [2024-11-28 08:29:28.023326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.918 [2024-11-28 08:29:28.023476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.918 [2024-11-28 08:29:28.023481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.918 [2024-11-28 08:29:28.023486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.918 [2024-11-28 08:29:28.023491] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.918 [2024-11-28 08:29:28.035473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.918 [2024-11-28 08:29:28.036033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.918 [2024-11-28 08:29:28.036063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.918 [2024-11-28 08:29:28.036071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.918 [2024-11-28 08:29:28.036244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.918 [2024-11-28 08:29:28.036397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.918 [2024-11-28 08:29:28.036404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.918 [2024-11-28 08:29:28.036409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.918 [2024-11-28 08:29:28.036415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.918 [2024-11-28 08:29:28.048103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.918 [2024-11-28 08:29:28.048644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.918 [2024-11-28 08:29:28.048675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.918 [2024-11-28 08:29:28.048687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.919 [2024-11-28 08:29:28.048856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.919 [2024-11-28 08:29:28.049008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.919 [2024-11-28 08:29:28.049014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.919 [2024-11-28 08:29:28.049020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.919 [2024-11-28 08:29:28.049026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.919 [2024-11-28 08:29:28.060791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.919 [2024-11-28 08:29:28.061220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.919 [2024-11-28 08:29:28.061236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.919 [2024-11-28 08:29:28.061241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.919 [2024-11-28 08:29:28.061392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.919 [2024-11-28 08:29:28.061542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.919 [2024-11-28 08:29:28.061548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.919 [2024-11-28 08:29:28.061552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.919 [2024-11-28 08:29:28.061557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.919 [2024-11-28 08:29:28.073507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.919 [2024-11-28 08:29:28.074048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.919 [2024-11-28 08:29:28.074078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.919 [2024-11-28 08:29:28.074087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.919 [2024-11-28 08:29:28.074259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.919 [2024-11-28 08:29:28.074413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.919 [2024-11-28 08:29:28.074419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.919 [2024-11-28 08:29:28.074425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.919 [2024-11-28 08:29:28.074430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.919 [2024-11-28 08:29:28.086113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.919 [2024-11-28 08:29:28.086682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.919 [2024-11-28 08:29:28.086713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.919 [2024-11-28 08:29:28.086721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.919 [2024-11-28 08:29:28.086887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.919 [2024-11-28 08:29:28.087043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.919 [2024-11-28 08:29:28.087050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.919 [2024-11-28 08:29:28.087055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.919 [2024-11-28 08:29:28.087061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.919 [2024-11-28 08:29:28.098744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.919 [2024-11-28 08:29:28.099372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.919 [2024-11-28 08:29:28.099402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.919 [2024-11-28 08:29:28.099411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.919 [2024-11-28 08:29:28.099580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.919 [2024-11-28 08:29:28.099733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.919 [2024-11-28 08:29:28.099739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.919 [2024-11-28 08:29:28.099744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.919 [2024-11-28 08:29:28.099750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.919 [2024-11-28 08:29:28.111436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.919 [2024-11-28 08:29:28.111872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.919 [2024-11-28 08:29:28.111901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.919 [2024-11-28 08:29:28.111909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.919 [2024-11-28 08:29:28.112076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.919 [2024-11-28 08:29:28.112236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.919 [2024-11-28 08:29:28.112243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.919 [2024-11-28 08:29:28.112248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.919 [2024-11-28 08:29:28.112253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.919 [2024-11-28 08:29:28.124073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.919 [2024-11-28 08:29:28.124639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.919 [2024-11-28 08:29:28.124669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.919 [2024-11-28 08:29:28.124678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.919 [2024-11-28 08:29:28.124843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.919 [2024-11-28 08:29:28.124996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.919 [2024-11-28 08:29:28.125002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.919 [2024-11-28 08:29:28.125011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.919 [2024-11-28 08:29:28.125016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.919 [2024-11-28 08:29:28.136695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.919 [2024-11-28 08:29:28.137239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.919 [2024-11-28 08:29:28.137270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.919 [2024-11-28 08:29:28.137278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.919 [2024-11-28 08:29:28.137443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.919 [2024-11-28 08:29:28.137596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.919 [2024-11-28 08:29:28.137602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.919 [2024-11-28 08:29:28.137607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.919 [2024-11-28 08:29:28.137613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.919 [2024-11-28 08:29:28.149309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.919 [2024-11-28 08:29:28.149860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.919 [2024-11-28 08:29:28.149890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.919 [2024-11-28 08:29:28.149899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.919 [2024-11-28 08:29:28.150064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.919 [2024-11-28 08:29:28.150224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.919 [2024-11-28 08:29:28.150231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.919 [2024-11-28 08:29:28.150237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.919 [2024-11-28 08:29:28.150243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.919 [2024-11-28 08:29:28.161922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.919 [2024-11-28 08:29:28.162504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.919 [2024-11-28 08:29:28.162534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.919 [2024-11-28 08:29:28.162543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.919 [2024-11-28 08:29:28.162708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.919 [2024-11-28 08:29:28.162861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.919 [2024-11-28 08:29:28.162867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.919 [2024-11-28 08:29:28.162872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.919 [2024-11-28 08:29:28.162878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.919 [2024-11-28 08:29:28.174568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.919 [2024-11-28 08:29:28.175166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.920 [2024-11-28 08:29:28.175196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.920 [2024-11-28 08:29:28.175205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.920 [2024-11-28 08:29:28.175370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.920 [2024-11-28 08:29:28.175523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.920 [2024-11-28 08:29:28.175529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.920 [2024-11-28 08:29:28.175534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.920 [2024-11-28 08:29:28.175540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.920 [2024-11-28 08:29:28.187218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.920 [2024-11-28 08:29:28.187744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.920 [2024-11-28 08:29:28.187774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.920 [2024-11-28 08:29:28.187783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.920 [2024-11-28 08:29:28.187948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.920 [2024-11-28 08:29:28.188101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.920 [2024-11-28 08:29:28.188107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.920 [2024-11-28 08:29:28.188113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.920 [2024-11-28 08:29:28.188118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.920 [2024-11-28 08:29:28.199941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.920 [2024-11-28 08:29:28.200529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.920 [2024-11-28 08:29:28.200559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:30.920 [2024-11-28 08:29:28.200568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:30.920 [2024-11-28 08:29:28.200733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:30.920 [2024-11-28 08:29:28.200886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.920 [2024-11-28 08:29:28.200892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.920 [2024-11-28 08:29:28.200898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.920 [2024-11-28 08:29:28.200903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.183 [2024-11-28 08:29:28.212594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.183 [2024-11-28 08:29:28.213142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.183 [2024-11-28 08:29:28.213177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.183 [2024-11-28 08:29:28.213190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.183 [2024-11-28 08:29:28.213358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.183 [2024-11-28 08:29:28.213510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.183 [2024-11-28 08:29:28.213516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.183 [2024-11-28 08:29:28.213522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.183 [2024-11-28 08:29:28.213528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.183 [2024-11-28 08:29:28.225216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.183 [2024-11-28 08:29:28.225774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.183 [2024-11-28 08:29:28.225804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.183 [2024-11-28 08:29:28.225813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.183 [2024-11-28 08:29:28.225978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.183 [2024-11-28 08:29:28.226130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.183 [2024-11-28 08:29:28.226137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.183 [2024-11-28 08:29:28.226142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.183 [2024-11-28 08:29:28.226147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.183 [2024-11-28 08:29:28.237831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.183 [2024-11-28 08:29:28.238292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.183 [2024-11-28 08:29:28.238322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.183 [2024-11-28 08:29:28.238331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.183 [2024-11-28 08:29:28.238498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.183 [2024-11-28 08:29:28.238651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.183 [2024-11-28 08:29:28.238657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.183 [2024-11-28 08:29:28.238663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.183 [2024-11-28 08:29:28.238669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.183 [2024-11-28 08:29:28.250504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.183 [2024-11-28 08:29:28.251037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.183 [2024-11-28 08:29:28.251067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.183 [2024-11-28 08:29:28.251076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.183 [2024-11-28 08:29:28.251250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.183 [2024-11-28 08:29:28.251407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.183 [2024-11-28 08:29:28.251413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.183 [2024-11-28 08:29:28.251419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.183 [2024-11-28 08:29:28.251424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.183 [2024-11-28 08:29:28.263099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.183 [2024-11-28 08:29:28.263694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.183 [2024-11-28 08:29:28.263723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.183 [2024-11-28 08:29:28.263732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.183 [2024-11-28 08:29:28.263900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.183 [2024-11-28 08:29:28.264053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.183 [2024-11-28 08:29:28.264060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.183 [2024-11-28 08:29:28.264065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.183 [2024-11-28 08:29:28.264071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.183 [2024-11-28 08:29:28.275760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.183 [2024-11-28 08:29:28.276261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.183 [2024-11-28 08:29:28.276291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.183 [2024-11-28 08:29:28.276299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.183 [2024-11-28 08:29:28.276467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.183 [2024-11-28 08:29:28.276620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.183 [2024-11-28 08:29:28.276626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.183 [2024-11-28 08:29:28.276632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.183 [2024-11-28 08:29:28.276637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.183 [2024-11-28 08:29:28.288468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.183 [2024-11-28 08:29:28.288926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.183 [2024-11-28 08:29:28.288941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.183 [2024-11-28 08:29:28.288946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.183 [2024-11-28 08:29:28.289096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.183 [2024-11-28 08:29:28.289253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.183 [2024-11-28 08:29:28.289260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.183 [2024-11-28 08:29:28.289269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.183 [2024-11-28 08:29:28.289274] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.183 [2024-11-28 08:29:28.301076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.183 [2024-11-28 08:29:28.301524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.183 [2024-11-28 08:29:28.301553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.183 [2024-11-28 08:29:28.301562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.183 [2024-11-28 08:29:28.301729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.183 [2024-11-28 08:29:28.301882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.183 [2024-11-28 08:29:28.301889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.183 [2024-11-28 08:29:28.301894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.183 [2024-11-28 08:29:28.301900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.183 [2024-11-28 08:29:28.313721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.183 [2024-11-28 08:29:28.314201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.183 [2024-11-28 08:29:28.314222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.183 [2024-11-28 08:29:28.314229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.183 [2024-11-28 08:29:28.314384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.183 [2024-11-28 08:29:28.314535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.183 [2024-11-28 08:29:28.314541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.183 [2024-11-28 08:29:28.314546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.184 [2024-11-28 08:29:28.314551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.184 [2024-11-28 08:29:28.326361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.184 [2024-11-28 08:29:28.326880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.184 [2024-11-28 08:29:28.326909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.184 [2024-11-28 08:29:28.326918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.184 [2024-11-28 08:29:28.327084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.184 [2024-11-28 08:29:28.327245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.184 [2024-11-28 08:29:28.327252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.184 [2024-11-28 08:29:28.327258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.184 [2024-11-28 08:29:28.327263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.184 [2024-11-28 08:29:28.339078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.184 [2024-11-28 08:29:28.339600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.184 [2024-11-28 08:29:28.339630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.184 [2024-11-28 08:29:28.339639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.184 [2024-11-28 08:29:28.339804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.184 [2024-11-28 08:29:28.339957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.184 [2024-11-28 08:29:28.339963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.184 [2024-11-28 08:29:28.339968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.184 [2024-11-28 08:29:28.339974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.184 [2024-11-28 08:29:28.351805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.184 [2024-11-28 08:29:28.352282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.184 [2024-11-28 08:29:28.352312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.184 [2024-11-28 08:29:28.352321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.184 [2024-11-28 08:29:28.352490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.184 [2024-11-28 08:29:28.352643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.184 [2024-11-28 08:29:28.352649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.184 [2024-11-28 08:29:28.352654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.184 [2024-11-28 08:29:28.352660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.184 [2024-11-28 08:29:28.365613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.184 5566.80 IOPS, 21.75 MiB/s [2024-11-28T07:29:28.473Z] [2024-11-28 08:29:28.366031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.184 [2024-11-28 08:29:28.366061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.184 [2024-11-28 08:29:28.366069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.184 [2024-11-28 08:29:28.366241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.184 [2024-11-28 08:29:28.366396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.184 [2024-11-28 08:29:28.366402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.184 [2024-11-28 08:29:28.366407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.184 [2024-11-28 08:29:28.366412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.184 [2024-11-28 08:29:28.378242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.184 [2024-11-28 08:29:28.378770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.184 [2024-11-28 08:29:28.378801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.184 [2024-11-28 08:29:28.378812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.184 [2024-11-28 08:29:28.378978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.184 [2024-11-28 08:29:28.379131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.184 [2024-11-28 08:29:28.379137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.184 [2024-11-28 08:29:28.379142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.184 [2024-11-28 08:29:28.379148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.184 [2024-11-28 08:29:28.390974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.184 [2024-11-28 08:29:28.391395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.184 [2024-11-28 08:29:28.391411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.184 [2024-11-28 08:29:28.391417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.184 [2024-11-28 08:29:28.391567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.184 [2024-11-28 08:29:28.391717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.184 [2024-11-28 08:29:28.391723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.184 [2024-11-28 08:29:28.391728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.184 [2024-11-28 08:29:28.391733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.184 [2024-11-28 08:29:28.403685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.184 [2024-11-28 08:29:28.404139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.184 [2024-11-28 08:29:28.404152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.184 [2024-11-28 08:29:28.404161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.184 [2024-11-28 08:29:28.404312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.184 [2024-11-28 08:29:28.404462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.184 [2024-11-28 08:29:28.404468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.184 [2024-11-28 08:29:28.404473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.184 [2024-11-28 08:29:28.404478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.184 [2024-11-28 08:29:28.416293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.184 [2024-11-28 08:29:28.416834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.184 [2024-11-28 08:29:28.416865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.184 [2024-11-28 08:29:28.416874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.184 [2024-11-28 08:29:28.417042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.184 [2024-11-28 08:29:28.417205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.184 [2024-11-28 08:29:28.417213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.184 [2024-11-28 08:29:28.417219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.184 [2024-11-28 08:29:28.417226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.184 [2024-11-28 08:29:28.428901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.184 [2024-11-28 08:29:28.429361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.184 [2024-11-28 08:29:28.429377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.184 [2024-11-28 08:29:28.429382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.184 [2024-11-28 08:29:28.429532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.184 [2024-11-28 08:29:28.429682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.184 [2024-11-28 08:29:28.429688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.184 [2024-11-28 08:29:28.429693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.184 [2024-11-28 08:29:28.429698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.184 [2024-11-28 08:29:28.441519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.184 [2024-11-28 08:29:28.441958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.184 [2024-11-28 08:29:28.441971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.184 [2024-11-28 08:29:28.441977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.184 [2024-11-28 08:29:28.442126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.185 [2024-11-28 08:29:28.442282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.185 [2024-11-28 08:29:28.442288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.185 [2024-11-28 08:29:28.442294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.185 [2024-11-28 08:29:28.442298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.185 [2024-11-28 08:29:28.454108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.185 [2024-11-28 08:29:28.454694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.185 [2024-11-28 08:29:28.454724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.185 [2024-11-28 08:29:28.454733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.185 [2024-11-28 08:29:28.454898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.185 [2024-11-28 08:29:28.455051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.185 [2024-11-28 08:29:28.455058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.185 [2024-11-28 08:29:28.455067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.185 [2024-11-28 08:29:28.455073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.185 [2024-11-28 08:29:28.466765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.185 [2024-11-28 08:29:28.467212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.185 [2024-11-28 08:29:28.467228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.185 [2024-11-28 08:29:28.467233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.185 [2024-11-28 08:29:28.467383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.185 [2024-11-28 08:29:28.467533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.185 [2024-11-28 08:29:28.467539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.185 [2024-11-28 08:29:28.467545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.185 [2024-11-28 08:29:28.467550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.447 [2024-11-28 08:29:28.479445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.447 [2024-11-28 08:29:28.479913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.447 [2024-11-28 08:29:28.479927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.448 [2024-11-28 08:29:28.479933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.448 [2024-11-28 08:29:28.480082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.448 [2024-11-28 08:29:28.480238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.448 [2024-11-28 08:29:28.480244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.448 [2024-11-28 08:29:28.480250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.448 [2024-11-28 08:29:28.480254] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.448 [2024-11-28 08:29:28.492060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.448 [2024-11-28 08:29:28.492642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.448 [2024-11-28 08:29:28.492673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.448 [2024-11-28 08:29:28.492682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.448 [2024-11-28 08:29:28.492847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.448 [2024-11-28 08:29:28.493000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.448 [2024-11-28 08:29:28.493006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.448 [2024-11-28 08:29:28.493012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.448 [2024-11-28 08:29:28.493017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.448 [2024-11-28 08:29:28.504710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.448 [2024-11-28 08:29:28.505192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.448 [2024-11-28 08:29:28.505207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.448 [2024-11-28 08:29:28.505213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.448 [2024-11-28 08:29:28.505363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.448 [2024-11-28 08:29:28.505512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.448 [2024-11-28 08:29:28.505518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.448 [2024-11-28 08:29:28.505523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.448 [2024-11-28 08:29:28.505528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.448 [2024-11-28 08:29:28.517340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.448 [2024-11-28 08:29:28.517794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.448 [2024-11-28 08:29:28.517807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.448 [2024-11-28 08:29:28.517812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.448 [2024-11-28 08:29:28.517961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.448 [2024-11-28 08:29:28.518111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.448 [2024-11-28 08:29:28.518117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.448 [2024-11-28 08:29:28.518122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.448 [2024-11-28 08:29:28.518127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.448 [2024-11-28 08:29:28.529947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.448 [2024-11-28 08:29:28.530301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.448 [2024-11-28 08:29:28.530314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.448 [2024-11-28 08:29:28.530319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.448 [2024-11-28 08:29:28.530469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.448 [2024-11-28 08:29:28.530619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.448 [2024-11-28 08:29:28.530626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.448 [2024-11-28 08:29:28.530631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.448 [2024-11-28 08:29:28.530636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.448 [2024-11-28 08:29:28.542609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.448 [2024-11-28 08:29:28.543064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.448 [2024-11-28 08:29:28.543076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.448 [2024-11-28 08:29:28.543084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.448 [2024-11-28 08:29:28.543238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.448 [2024-11-28 08:29:28.543388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.448 [2024-11-28 08:29:28.543394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.448 [2024-11-28 08:29:28.543400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.448 [2024-11-28 08:29:28.543405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.448 [2024-11-28 08:29:28.555224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.448 [2024-11-28 08:29:28.555638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.448 [2024-11-28 08:29:28.555650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.448 [2024-11-28 08:29:28.555655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.448 [2024-11-28 08:29:28.555805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.448 [2024-11-28 08:29:28.555954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.448 [2024-11-28 08:29:28.555960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.448 [2024-11-28 08:29:28.555964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.448 [2024-11-28 08:29:28.555969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.448 [2024-11-28 08:29:28.567939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.448 [2024-11-28 08:29:28.568367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.448 [2024-11-28 08:29:28.568379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.448 [2024-11-28 08:29:28.568385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.448 [2024-11-28 08:29:28.568534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.448 [2024-11-28 08:29:28.568684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.448 [2024-11-28 08:29:28.568690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.448 [2024-11-28 08:29:28.568694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.448 [2024-11-28 08:29:28.568699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.448 [2024-11-28 08:29:28.580667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.448 [2024-11-28 08:29:28.581071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.448 [2024-11-28 08:29:28.581083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.448 [2024-11-28 08:29:28.581088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.448 [2024-11-28 08:29:28.581241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.448 [2024-11-28 08:29:28.581394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.448 [2024-11-28 08:29:28.581400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.448 [2024-11-28 08:29:28.581405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.448 [2024-11-28 08:29:28.581410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.448 [2024-11-28 08:29:28.593365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.448 [2024-11-28 08:29:28.593883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.448 [2024-11-28 08:29:28.593913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.448 [2024-11-28 08:29:28.593922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.448 [2024-11-28 08:29:28.594087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.448 [2024-11-28 08:29:28.594247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.448 [2024-11-28 08:29:28.594254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.448 [2024-11-28 08:29:28.594261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.448 [2024-11-28 08:29:28.594267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.449 [2024-11-28 08:29:28.606076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.449 [2024-11-28 08:29:28.606539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.449 [2024-11-28 08:29:28.606554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.449 [2024-11-28 08:29:28.606559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.449 [2024-11-28 08:29:28.606709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.449 [2024-11-28 08:29:28.606859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.449 [2024-11-28 08:29:28.606865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.449 [2024-11-28 08:29:28.606870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.449 [2024-11-28 08:29:28.606875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.449 [2024-11-28 08:29:28.618699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.449 [2024-11-28 08:29:28.619250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.449 [2024-11-28 08:29:28.619281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.449 [2024-11-28 08:29:28.619289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.449 [2024-11-28 08:29:28.619455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.449 [2024-11-28 08:29:28.619608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.449 [2024-11-28 08:29:28.619614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.449 [2024-11-28 08:29:28.619623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.449 [2024-11-28 08:29:28.619629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.449 [2024-11-28 08:29:28.631319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.449 [2024-11-28 08:29:28.631865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.449 [2024-11-28 08:29:28.631895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.449 [2024-11-28 08:29:28.631904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.449 [2024-11-28 08:29:28.632069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.449 [2024-11-28 08:29:28.632228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.449 [2024-11-28 08:29:28.632234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.449 [2024-11-28 08:29:28.632240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.449 [2024-11-28 08:29:28.632246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.449 [2024-11-28 08:29:28.643932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.449 [2024-11-28 08:29:28.644505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.449 [2024-11-28 08:29:28.644536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.449 [2024-11-28 08:29:28.644544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.449 [2024-11-28 08:29:28.644709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.449 [2024-11-28 08:29:28.644862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.449 [2024-11-28 08:29:28.644868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.449 [2024-11-28 08:29:28.644874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.449 [2024-11-28 08:29:28.644879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.449 [2024-11-28 08:29:28.656559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.449 [2024-11-28 08:29:28.657021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.449 [2024-11-28 08:29:28.657036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.449 [2024-11-28 08:29:28.657041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.449 [2024-11-28 08:29:28.657196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.449 [2024-11-28 08:29:28.657347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.449 [2024-11-28 08:29:28.657353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.449 [2024-11-28 08:29:28.657358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.449 [2024-11-28 08:29:28.657363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.449 [2024-11-28 08:29:28.669272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.449 [2024-11-28 08:29:28.669832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.449 [2024-11-28 08:29:28.669862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.449 [2024-11-28 08:29:28.669870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.449 [2024-11-28 08:29:28.670035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.449 [2024-11-28 08:29:28.670195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.449 [2024-11-28 08:29:28.670202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.449 [2024-11-28 08:29:28.670207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.449 [2024-11-28 08:29:28.670213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.449 [2024-11-28 08:29:28.681894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.449 [2024-11-28 08:29:28.682480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.449 [2024-11-28 08:29:28.682511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.449 [2024-11-28 08:29:28.682520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.449 [2024-11-28 08:29:28.682685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.449 [2024-11-28 08:29:28.682838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.449 [2024-11-28 08:29:28.682844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.449 [2024-11-28 08:29:28.682850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.449 [2024-11-28 08:29:28.682855] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.449 [2024-11-28 08:29:28.694544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.449 [2024-11-28 08:29:28.695071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.449 [2024-11-28 08:29:28.695100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.449 [2024-11-28 08:29:28.695109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.449 [2024-11-28 08:29:28.695284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.449 [2024-11-28 08:29:28.695437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.449 [2024-11-28 08:29:28.695444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.449 [2024-11-28 08:29:28.695450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.449 [2024-11-28 08:29:28.695455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.449 [2024-11-28 08:29:28.707138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.449 [2024-11-28 08:29:28.707680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.449 [2024-11-28 08:29:28.707711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.449 [2024-11-28 08:29:28.707723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.449 [2024-11-28 08:29:28.707888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.449 [2024-11-28 08:29:28.708041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.449 [2024-11-28 08:29:28.708047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.449 [2024-11-28 08:29:28.708053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.449 [2024-11-28 08:29:28.708058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.449 [2024-11-28 08:29:28.719744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.449 [2024-11-28 08:29:28.720204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.449 [2024-11-28 08:29:28.720220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.449 [2024-11-28 08:29:28.720225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.449 [2024-11-28 08:29:28.720375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.449 [2024-11-28 08:29:28.720526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.449 [2024-11-28 08:29:28.720532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.449 [2024-11-28 08:29:28.720537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.450 [2024-11-28 08:29:28.720542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.450 [2024-11-28 08:29:28.732358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.450 [2024-11-28 08:29:28.732817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.450 [2024-11-28 08:29:28.732829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.450 [2024-11-28 08:29:28.732835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.450 [2024-11-28 08:29:28.732984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.713 [2024-11-28 08:29:28.733133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.713 [2024-11-28 08:29:28.733141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.713 [2024-11-28 08:29:28.733147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.713 [2024-11-28 08:29:28.733153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.713 [2024-11-28 08:29:28.744980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.713 [2024-11-28 08:29:28.745456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.713 [2024-11-28 08:29:28.745469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.713 [2024-11-28 08:29:28.745475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.713 [2024-11-28 08:29:28.745624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.713 [2024-11-28 08:29:28.745778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.713 [2024-11-28 08:29:28.745783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.713 [2024-11-28 08:29:28.745788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.713 [2024-11-28 08:29:28.745793] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.713 [2024-11-28 08:29:28.757610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.713 [2024-11-28 08:29:28.758059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.713 [2024-11-28 08:29:28.758072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.713 [2024-11-28 08:29:28.758077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.713 [2024-11-28 08:29:28.758231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.713 [2024-11-28 08:29:28.758381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.713 [2024-11-28 08:29:28.758387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.713 [2024-11-28 08:29:28.758393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.713 [2024-11-28 08:29:28.758397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.713 [2024-11-28 08:29:28.770226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.713 [2024-11-28 08:29:28.770643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.713 [2024-11-28 08:29:28.770655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.713 [2024-11-28 08:29:28.770660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.713 [2024-11-28 08:29:28.770810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.713 [2024-11-28 08:29:28.770959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.713 [2024-11-28 08:29:28.770965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.713 [2024-11-28 08:29:28.770970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.713 [2024-11-28 08:29:28.770975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.713 [2024-11-28 08:29:28.782939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.713 [2024-11-28 08:29:28.783342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.713 [2024-11-28 08:29:28.783373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.713 [2024-11-28 08:29:28.783382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.713 [2024-11-28 08:29:28.783550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.713 [2024-11-28 08:29:28.783703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.713 [2024-11-28 08:29:28.783709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.713 [2024-11-28 08:29:28.783718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.713 [2024-11-28 08:29:28.783724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.713 [2024-11-28 08:29:28.795566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.713 [2024-11-28 08:29:28.795990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.713 [2024-11-28 08:29:28.796006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.713 [2024-11-28 08:29:28.796012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.713 [2024-11-28 08:29:28.796169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.713 [2024-11-28 08:29:28.796320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.713 [2024-11-28 08:29:28.796328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.713 [2024-11-28 08:29:28.796334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.713 [2024-11-28 08:29:28.796339] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.713 [2024-11-28 08:29:28.808293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.713 [2024-11-28 08:29:28.808829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.713 [2024-11-28 08:29:28.808859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.713 [2024-11-28 08:29:28.808867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.713 [2024-11-28 08:29:28.809033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.713 [2024-11-28 08:29:28.809192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.713 [2024-11-28 08:29:28.809199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.713 [2024-11-28 08:29:28.809205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.713 [2024-11-28 08:29:28.809211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.713 [2024-11-28 08:29:28.820894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.713 [2024-11-28 08:29:28.821473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.713 [2024-11-28 08:29:28.821503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.713 [2024-11-28 08:29:28.821511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.713 [2024-11-28 08:29:28.821676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.713 [2024-11-28 08:29:28.821829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.714 [2024-11-28 08:29:28.821835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.714 [2024-11-28 08:29:28.821840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.714 [2024-11-28 08:29:28.821846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.714 [2024-11-28 08:29:28.833539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.714 [2024-11-28 08:29:28.834000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.714 [2024-11-28 08:29:28.834016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.714 [2024-11-28 08:29:28.834021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.714 [2024-11-28 08:29:28.834176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.714 [2024-11-28 08:29:28.834327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.714 [2024-11-28 08:29:28.834332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.714 [2024-11-28 08:29:28.834337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.714 [2024-11-28 08:29:28.834342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.714 [2024-11-28 08:29:28.846174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.714 [2024-11-28 08:29:28.846585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.714 [2024-11-28 08:29:28.846598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.714 [2024-11-28 08:29:28.846603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.714 [2024-11-28 08:29:28.846753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.714 [2024-11-28 08:29:28.846902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.714 [2024-11-28 08:29:28.846908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.714 [2024-11-28 08:29:28.846913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.714 [2024-11-28 08:29:28.846918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.714 [2024-11-28 08:29:28.858888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.714 [2024-11-28 08:29:28.859308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.714 [2024-11-28 08:29:28.859322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.714 [2024-11-28 08:29:28.859327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.714 [2024-11-28 08:29:28.859477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.714 [2024-11-28 08:29:28.859626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.714 [2024-11-28 08:29:28.859632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.714 [2024-11-28 08:29:28.859637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.714 [2024-11-28 08:29:28.859642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.714 [2024-11-28 08:29:28.871477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.714 [2024-11-28 08:29:28.872020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.714 [2024-11-28 08:29:28.872050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.714 [2024-11-28 08:29:28.872063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.714 [2024-11-28 08:29:28.872234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.714 [2024-11-28 08:29:28.872388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.714 [2024-11-28 08:29:28.872394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.714 [2024-11-28 08:29:28.872399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.714 [2024-11-28 08:29:28.872405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.714 [2024-11-28 08:29:28.884088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.714 [2024-11-28 08:29:28.884659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.714 [2024-11-28 08:29:28.884690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.714 [2024-11-28 08:29:28.884700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.714 [2024-11-28 08:29:28.884865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.714 [2024-11-28 08:29:28.885017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.714 [2024-11-28 08:29:28.885024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.714 [2024-11-28 08:29:28.885029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.714 [2024-11-28 08:29:28.885035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.714 [2024-11-28 08:29:28.896722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.714 [2024-11-28 08:29:28.897109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.714 [2024-11-28 08:29:28.897124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.714 [2024-11-28 08:29:28.897129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.714 [2024-11-28 08:29:28.897284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.714 [2024-11-28 08:29:28.897435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.714 [2024-11-28 08:29:28.897441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.714 [2024-11-28 08:29:28.897446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.714 [2024-11-28 08:29:28.897451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.714 [2024-11-28 08:29:28.909414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.714 [2024-11-28 08:29:28.909985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.714 [2024-11-28 08:29:28.910015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.714 [2024-11-28 08:29:28.910024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.714 [2024-11-28 08:29:28.910197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.714 [2024-11-28 08:29:28.910355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.714 [2024-11-28 08:29:28.910362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.714 [2024-11-28 08:29:28.910367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.714 [2024-11-28 08:29:28.910372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.714 [2024-11-28 08:29:28.922054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.714 [2024-11-28 08:29:28.922587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.714 [2024-11-28 08:29:28.922603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.714 [2024-11-28 08:29:28.922609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.714 [2024-11-28 08:29:28.922758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.714 [2024-11-28 08:29:28.922908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.714 [2024-11-28 08:29:28.922914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.714 [2024-11-28 08:29:28.922919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.714 [2024-11-28 08:29:28.922924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.714 [2024-11-28 08:29:28.934741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.714 [2024-11-28 08:29:28.935222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.714 [2024-11-28 08:29:28.935235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.714 [2024-11-28 08:29:28.935241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.714 [2024-11-28 08:29:28.935390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.714 [2024-11-28 08:29:28.935540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.714 [2024-11-28 08:29:28.935546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.714 [2024-11-28 08:29:28.935551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.714 [2024-11-28 08:29:28.935556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.714 [2024-11-28 08:29:28.947374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.714 [2024-11-28 08:29:28.947828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.714 [2024-11-28 08:29:28.947840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.715 [2024-11-28 08:29:28.947845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.715 [2024-11-28 08:29:28.947995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.715 [2024-11-28 08:29:28.948144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.715 [2024-11-28 08:29:28.948150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.715 [2024-11-28 08:29:28.948165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.715 [2024-11-28 08:29:28.948170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.715 [2024-11-28 08:29:28.959996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:31.715 [2024-11-28 08:29:28.960529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:31.715 [2024-11-28 08:29:28.960559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:31.715 [2024-11-28 08:29:28.960568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:31.715 [2024-11-28 08:29:28.960734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:31.715 [2024-11-28 08:29:28.960886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:31.715 [2024-11-28 08:29:28.960892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:31.715 [2024-11-28 08:29:28.960898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:31.715 [2024-11-28 08:29:28.960903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.715 [2024-11-28 08:29:28.972594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.715 [2024-11-28 08:29:28.972937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.715 [2024-11-28 08:29:28.972954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.715 [2024-11-28 08:29:28.972960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.715 [2024-11-28 08:29:28.973112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.715 [2024-11-28 08:29:28.973269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.715 [2024-11-28 08:29:28.973277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.715 [2024-11-28 08:29:28.973282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.715 [2024-11-28 08:29:28.973287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.715 [2024-11-28 08:29:28.985247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2153352 Killed "${NVMF_APP[@]}" "$@" 00:30:31.715 [2024-11-28 08:29:28.985840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.715 [2024-11-28 08:29:28.985871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.715 [2024-11-28 08:29:28.985880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.715 [2024-11-28 08:29:28.986045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.715 [2024-11-28 08:29:28.986203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.715 [2024-11-28 08:29:28.986211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.715 [2024-11-28 08:29:28.986216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.715 [2024-11-28 08:29:28.986225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
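The shell diagnostic in the middle of this block is the key event: bdevperf.sh (line 35) reports that the previous nvmf target process, PID 2153352, was killed. With nothing listening on 10.0.0.2:4420 inside the test namespace, every reconnect attempt fails with errno = 111, which on Linux is ECONNREFUSED. The same behavior is easy to reproduce from a shell against any port with no listener (hypothetical address/port shown; not part of the test scripts):

    # With no listener on the port, the TCP handshake is rejected at once,
    # mirroring the posix_sock_create "errno = 111" lines above.
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/127.0.0.1/4420'; then
        echo "connect() failed, errno = 111 (Connection refused)"
    fi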
00:30:31.715 08:29:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:30:31.715 08:29:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:31.715 08:29:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:31.715 08:29:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:31.715 08:29:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:31.715 08:29:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2155011 00:30:31.715 08:29:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2155011 00:30:31.715 08:29:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:31.715 08:29:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2155011 ']' 00:30:31.715 08:29:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:31.715 08:29:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:31.715 08:29:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:31.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:31.715 08:29:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:31.715 08:29:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:31.715 [2024-11-28 08:29:28.997899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.979 [2024-11-28 08:29:28.998260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.979 [2024-11-28 08:29:28.998278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.979 [2024-11-28 08:29:28.998285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.979 [2024-11-28 08:29:28.998435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.979 [2024-11-28 08:29:28.998586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.979 [2024-11-28 08:29:28.998592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.979 [2024-11-28 08:29:28.998597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.979 [2024-11-28 08:29:28.998603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
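Interleaved with the reconnect noise, the xtrace shows the recovery path: tgt_init runs nvmfappstart, which relaunches nvmf_tgt (new PID 2155011) inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the app's RPC socket is usable. A simplified sketch of that wait loop, reusing the rpc_addr and max_retries locals visible in the trace (the real implementation lives in common/autotest_common.sh and does more, e.g. verifying the PID is still alive):

    # Minimal waitforlisten-style loop: poll for the SPDK app's RPC socket.
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( max_retries-- > 0 )); do
        [[ -S $rpc_addr ]] && break   # socket exists -> app is up and listening
        sleep 0.1
    done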
00:30:31.979 [2024-11-28 08:29:29.010578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.979 [2024-11-28 08:29:29.011214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.979 [2024-11-28 08:29:29.011245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.979 [2024-11-28 08:29:29.011254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.979 [2024-11-28 08:29:29.011422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.979 [2024-11-28 08:29:29.011575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.979 [2024-11-28 08:29:29.011581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.979 [2024-11-28 08:29:29.011586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.979 [2024-11-28 08:29:29.011596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.979 [2024-11-28 08:29:29.023289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.979 [2024-11-28 08:29:29.023933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.979 [2024-11-28 08:29:29.023963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.979 [2024-11-28 08:29:29.023972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.979 [2024-11-28 08:29:29.024138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.979 [2024-11-28 08:29:29.024297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.979 [2024-11-28 08:29:29.024304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.979 [2024-11-28 08:29:29.024309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.979 [2024-11-28 08:29:29.024315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.979 [2024-11-28 08:29:29.036002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.979 [2024-11-28 08:29:29.036495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.979 [2024-11-28 08:29:29.036510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.979 [2024-11-28 08:29:29.036516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.979 [2024-11-28 08:29:29.036666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.979 [2024-11-28 08:29:29.036817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.979 [2024-11-28 08:29:29.036824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.979 [2024-11-28 08:29:29.036828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.979 [2024-11-28 08:29:29.036833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.979 [2024-11-28 08:29:29.048673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.979 [2024-11-28 08:29:29.049061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.979 [2024-11-28 08:29:29.049074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.979 [2024-11-28 08:29:29.049080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.979 [2024-11-28 08:29:29.049233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.979 [2024-11-28 08:29:29.049383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.979 [2024-11-28 08:29:29.049390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.979 [2024-11-28 08:29:29.049395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.979 [2024-11-28 08:29:29.049400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.979 [2024-11-28 08:29:29.049397] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:30:31.979 [2024-11-28 08:29:29.049440] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:31.979 [2024-11-28 08:29:29.061369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.979 [2024-11-28 08:29:29.061945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.979 [2024-11-28 08:29:29.061975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.979 [2024-11-28 08:29:29.061984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.979 [2024-11-28 08:29:29.062150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.979 [2024-11-28 08:29:29.062308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.979 [2024-11-28 08:29:29.062315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.979 [2024-11-28 08:29:29.062322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.979 [2024-11-28 08:29:29.062327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.979 [2024-11-28 08:29:29.074027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.979 [2024-11-28 08:29:29.074500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.980 [2024-11-28 08:29:29.074516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.980 [2024-11-28 08:29:29.074521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.980 [2024-11-28 08:29:29.074671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.980 [2024-11-28 08:29:29.074822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.980 [2024-11-28 08:29:29.074827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.980 [2024-11-28 08:29:29.074832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.980 [2024-11-28 08:29:29.074837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.980 [2024-11-28 08:29:29.086741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.980 [2024-11-28 08:29:29.087197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.980 [2024-11-28 08:29:29.087213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.980 [2024-11-28 08:29:29.087218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.980 [2024-11-28 08:29:29.087368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.980 [2024-11-28 08:29:29.087518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.980 [2024-11-28 08:29:29.087524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.980 [2024-11-28 08:29:29.087529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.980 [2024-11-28 08:29:29.087535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.980 [2024-11-28 08:29:29.099374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.980 [2024-11-28 08:29:29.099835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.980 [2024-11-28 08:29:29.099848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.980 [2024-11-28 08:29:29.099855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.980 [2024-11-28 08:29:29.100005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.980 [2024-11-28 08:29:29.100155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.980 [2024-11-28 08:29:29.100167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.980 [2024-11-28 08:29:29.100173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.980 [2024-11-28 08:29:29.100177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.980 [2024-11-28 08:29:29.112006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.980 [2024-11-28 08:29:29.112494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.980 [2024-11-28 08:29:29.112507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.980 [2024-11-28 08:29:29.112512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.980 [2024-11-28 08:29:29.112662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.980 [2024-11-28 08:29:29.112812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.980 [2024-11-28 08:29:29.112818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.980 [2024-11-28 08:29:29.112823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.980 [2024-11-28 08:29:29.112828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.980 [2024-11-28 08:29:29.124647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.980 [2024-11-28 08:29:29.125137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.980 [2024-11-28 08:29:29.125149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.980 [2024-11-28 08:29:29.125155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.980 [2024-11-28 08:29:29.125309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.980 [2024-11-28 08:29:29.125459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.980 [2024-11-28 08:29:29.125464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.980 [2024-11-28 08:29:29.125470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.980 [2024-11-28 08:29:29.125474] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.980 [2024-11-28 08:29:29.137284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.980 [2024-11-28 08:29:29.137792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.980 [2024-11-28 08:29:29.137805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.980 [2024-11-28 08:29:29.137810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.980 [2024-11-28 08:29:29.137963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.980 [2024-11-28 08:29:29.138113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.980 [2024-11-28 08:29:29.138118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.980 [2024-11-28 08:29:29.138123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.980 [2024-11-28 08:29:29.138128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.980 [2024-11-28 08:29:29.140809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:31.980 [2024-11-28 08:29:29.149973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.980 [2024-11-28 08:29:29.150537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.980 [2024-11-28 08:29:29.150570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.980 [2024-11-28 08:29:29.150580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.980 [2024-11-28 08:29:29.150752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.980 [2024-11-28 08:29:29.150906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.980 [2024-11-28 08:29:29.150912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.980 [2024-11-28 08:29:29.150918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.980 [2024-11-28 08:29:29.150924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.980 [2024-11-28 08:29:29.162617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.980 [2024-11-28 08:29:29.163236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.980 [2024-11-28 08:29:29.163268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.980 [2024-11-28 08:29:29.163277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.980 [2024-11-28 08:29:29.163447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.980 [2024-11-28 08:29:29.163610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.980 [2024-11-28 08:29:29.163619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.980 [2024-11-28 08:29:29.163624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.980 [2024-11-28 08:29:29.163630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.980 [2024-11-28 08:29:29.170203] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:31.980 [2024-11-28 08:29:29.170223] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:31.980 [2024-11-28 08:29:29.170229] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:31.980 [2024-11-28 08:29:29.170235] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:31.980 [2024-11-28 08:29:29.170240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
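The app_setup_trace notices above describe how to inspect the tracepoints enabled by -e 0xFFFF; the two options they mention boil down to (commands as given in the notices):

    # Snapshot the nvmf app's trace ring live (instance id 0)...
    spdk_trace -s nvmf -i 0
    # ...or keep the shared-memory trace file for offline analysis.
    cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0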
00:30:31.980 [2024-11-28 08:29:29.171276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:31.980 [2024-11-28 08:29:29.171516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:31.980 [2024-11-28 08:29:29.171517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:31.980 [2024-11-28 08:29:29.175315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.980 [2024-11-28 08:29:29.175882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.980 [2024-11-28 08:29:29.175913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.980 [2024-11-28 08:29:29.175922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.980 [2024-11-28 08:29:29.176089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.980 [2024-11-28 08:29:29.176248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.980 [2024-11-28 08:29:29.176256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.980 [2024-11-28 08:29:29.176261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.980 [2024-11-28 08:29:29.176267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.980 [2024-11-28 08:29:29.187950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.980 [2024-11-28 08:29:29.188387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.981 [2024-11-28 08:29:29.188418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.981 [2024-11-28 08:29:29.188427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.981 [2024-11-28 08:29:29.188595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.981 [2024-11-28 08:29:29.188749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.981 [2024-11-28 08:29:29.188755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.981 [2024-11-28 08:29:29.188762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.981 [2024-11-28 08:29:29.188768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
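The three reactor threads on cores 1-3 above follow directly from the -m 0xE mask passed to nvmf_tgt: 0xE is binary 1110, so bits 1, 2 and 3 are set, matching the earlier "Total cores available: 3" notice. A quick way to decode such a mask (illustrative helper, not from the test scripts):

    # Print each core selected by an SPDK core mask.
    mask=0xE
    for bit in {0..31}; do
        (( (mask >> bit) & 1 )) && echo "reactor expected on core $bit"
    done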
00:30:31.981 [2024-11-28 08:29:29.200599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.981 [2024-11-28 08:29:29.201180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.981 [2024-11-28 08:29:29.201210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.981 [2024-11-28 08:29:29.201220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.981 [2024-11-28 08:29:29.201387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.981 [2024-11-28 08:29:29.201540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.981 [2024-11-28 08:29:29.201547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.981 [2024-11-28 08:29:29.201552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.981 [2024-11-28 08:29:29.201558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.981 [2024-11-28 08:29:29.213243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.981 [2024-11-28 08:29:29.213739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.981 [2024-11-28 08:29:29.213770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.981 [2024-11-28 08:29:29.213779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.981 [2024-11-28 08:29:29.213945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.981 [2024-11-28 08:29:29.214098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.981 [2024-11-28 08:29:29.214105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.981 [2024-11-28 08:29:29.214110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.981 [2024-11-28 08:29:29.214116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.981 [2024-11-28 08:29:29.225929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.981 [2024-11-28 08:29:29.226421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.981 [2024-11-28 08:29:29.226451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.981 [2024-11-28 08:29:29.226460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.981 [2024-11-28 08:29:29.226628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.981 [2024-11-28 08:29:29.226781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.981 [2024-11-28 08:29:29.226788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.981 [2024-11-28 08:29:29.226793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.981 [2024-11-28 08:29:29.226799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.981 [2024-11-28 08:29:29.238618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.981 [2024-11-28 08:29:29.239024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.981 [2024-11-28 08:29:29.239054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.981 [2024-11-28 08:29:29.239063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.981 [2024-11-28 08:29:29.239233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.981 [2024-11-28 08:29:29.239387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.981 [2024-11-28 08:29:29.239393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.981 [2024-11-28 08:29:29.239398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.981 [2024-11-28 08:29:29.239404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.981 [2024-11-28 08:29:29.251240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.981 [2024-11-28 08:29:29.251726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.981 [2024-11-28 08:29:29.251756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:31.981 [2024-11-28 08:29:29.251768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:31.981 [2024-11-28 08:29:29.251934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:31.981 [2024-11-28 08:29:29.252086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.981 [2024-11-28 08:29:29.252093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.981 [2024-11-28 08:29:29.252098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.981 [2024-11-28 08:29:29.252103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.981 [2024-11-28 08:29:29.263941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:32.244 [2024-11-28 08:29:29.264362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.244 [2024-11-28 08:29:29.264393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:32.244 [2024-11-28 08:29:29.264402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:32.244 [2024-11-28 08:29:29.264571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:32.244 [2024-11-28 08:29:29.264724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:32.244 [2024-11-28 08:29:29.264731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:32.244 [2024-11-28 08:29:29.264736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:32.244 [2024-11-28 08:29:29.264742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:32.244 [2024-11-28 08:29:29.276564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:32.244 [2024-11-28 08:29:29.277002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.244 [2024-11-28 08:29:29.277017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:32.244 [2024-11-28 08:29:29.277022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:32.245 [2024-11-28 08:29:29.277178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:32.245 [2024-11-28 08:29:29.277329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:32.245 [2024-11-28 08:29:29.277335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:32.245 [2024-11-28 08:29:29.277340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:32.245 [2024-11-28 08:29:29.277346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:32.245 [2024-11-28 08:29:29.289163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:32.245 [2024-11-28 08:29:29.289683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.245 [2024-11-28 08:29:29.289713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:32.245 [2024-11-28 08:29:29.289722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:32.245 [2024-11-28 08:29:29.289891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:32.245 [2024-11-28 08:29:29.290048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:32.245 [2024-11-28 08:29:29.290055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:32.245 [2024-11-28 08:29:29.290060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:32.245 [2024-11-28 08:29:29.290066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:32.245 [2024-11-28 08:29:29.301878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:32.245 [2024-11-28 08:29:29.302450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.245 [2024-11-28 08:29:29.302480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:32.245 [2024-11-28 08:29:29.302488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:32.245 [2024-11-28 08:29:29.302654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:32.245 [2024-11-28 08:29:29.302807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:32.245 [2024-11-28 08:29:29.302814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:32.245 [2024-11-28 08:29:29.302819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:32.245 [2024-11-28 08:29:29.302825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:32.245 [2024-11-28 08:29:29.314593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:32.245 [2024-11-28 08:29:29.315056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.245 [2024-11-28 08:29:29.315071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:32.245 [2024-11-28 08:29:29.315077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:32.245 [2024-11-28 08:29:29.315231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:32.245 [2024-11-28 08:29:29.315381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:32.245 [2024-11-28 08:29:29.315387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:32.245 [2024-11-28 08:29:29.315392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:32.245 [2024-11-28 08:29:29.315397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:32.245 [2024-11-28 08:29:29.327194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:32.245 [2024-11-28 08:29:29.327656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.245 [2024-11-28 08:29:29.327669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:32.245 [2024-11-28 08:29:29.327674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:32.245 [2024-11-28 08:29:29.327824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:32.245 [2024-11-28 08:29:29.327973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:32.245 [2024-11-28 08:29:29.327979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:32.245 [2024-11-28 08:29:29.327984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:32.245 [2024-11-28 08:29:29.327993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:32.245 [2024-11-28 08:29:29.339806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:32.245 [2024-11-28 08:29:29.340289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.245 [2024-11-28 08:29:29.340320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:32.245 [2024-11-28 08:29:29.340329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:32.245 [2024-11-28 08:29:29.340494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:32.245 [2024-11-28 08:29:29.340647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:32.245 [2024-11-28 08:29:29.340653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:32.245 [2024-11-28 08:29:29.340659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:32.245 [2024-11-28 08:29:29.340665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:32.245 [2024-11-28 08:29:29.352499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:32.245 [2024-11-28 08:29:29.352966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.245 [2024-11-28 08:29:29.352980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:32.245 [2024-11-28 08:29:29.352985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:32.245 [2024-11-28 08:29:29.353135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:32.245 [2024-11-28 08:29:29.353290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:32.245 [2024-11-28 08:29:29.353296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:32.245 [2024-11-28 08:29:29.353301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:32.245 [2024-11-28 08:29:29.353306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:32.245 [2024-11-28 08:29:29.365115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:32.245 [2024-11-28 08:29:29.365679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.245 [2024-11-28 08:29:29.365709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:32.245 [2024-11-28 08:29:29.365718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:32.245 [2024-11-28 08:29:29.365883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:32.245 [2024-11-28 08:29:29.366036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:32.245 [2024-11-28 08:29:29.366042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:32.245 [2024-11-28 08:29:29.366048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:32.245 [2024-11-28 08:29:29.366053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:32.245 4639.00 IOPS, 18.12 MiB/s [2024-11-28T07:29:29.534Z] [2024-11-28 08:29:29.377749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:32.245 [2024-11-28 08:29:29.378292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.246 [2024-11-28 08:29:29.378322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:32.246 [2024-11-28 08:29:29.378331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:32.246 [2024-11-28 08:29:29.378500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:32.246 [2024-11-28 08:29:29.378653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:32.246 [2024-11-28 08:29:29.378659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:32.246 [2024-11-28 08:29:29.378665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:32.246 [2024-11-28 08:29:29.378670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:32.246 [2024-11-28 08:29:29.390349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:32.246 [2024-11-28 08:29:29.390909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.246 [2024-11-28 08:29:29.390939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:32.246 [2024-11-28 08:29:29.390947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:32.246 [2024-11-28 08:29:29.391113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:32.246 [2024-11-28 08:29:29.391272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:32.246 [2024-11-28 08:29:29.391280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:32.246 [2024-11-28 08:29:29.391285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:32.246 [2024-11-28 08:29:29.391291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
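The periodic bdevperf report at the head of this block ties the two figures together: 4639.00 IOPS at 18.12 MiB/s implies a 4 KiB I/O size (4639 x 4096 B per second is about 18.12 MiB). The arithmetic as a one-liner (illustrative only):

    # 4639 IOPS * 4096 B per I/O, expressed in MiB/s.
    awk 'BEGIN { printf "%.2f MiB/s\n", 4639 * 4096 / (1024 * 1024) }'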
00:30:32.246 [2024-11-28 08:29:29.402959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:32.246 [2024-11-28 08:29:29.403455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.246 [2024-11-28 08:29:29.403485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:32.246 [2024-11-28 08:29:29.403494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:32.246 [2024-11-28 08:29:29.403659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:32.246 [2024-11-28 08:29:29.403813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:32.246 [2024-11-28 08:29:29.403819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:32.246 [2024-11-28 08:29:29.403824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:32.246 [2024-11-28 08:29:29.403830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:32.246 [2024-11-28 08:29:29.415656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:32.246 [2024-11-28 08:29:29.416287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.246 [2024-11-28 08:29:29.416317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:32.246 [2024-11-28 08:29:29.416330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:32.246 [2024-11-28 08:29:29.416497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:32.246 [2024-11-28 08:29:29.416649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:32.246 [2024-11-28 08:29:29.416656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:32.246 [2024-11-28 08:29:29.416661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:32.246 [2024-11-28 08:29:29.416667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:32.246 [2024-11-28 08:29:29.428346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:32.246 [2024-11-28 08:29:29.428916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.246 [2024-11-28 08:29:29.428946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:32.246 [2024-11-28 08:29:29.428954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:32.246 [2024-11-28 08:29:29.429120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:32.246 [2024-11-28 08:29:29.429278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:32.246 [2024-11-28 08:29:29.429285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:32.246 [2024-11-28 08:29:29.429291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:32.246 [2024-11-28 08:29:29.429297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:32.246 [2024-11-28 08:29:29.440970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:32.246 [2024-11-28 08:29:29.441439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.246 [2024-11-28 08:29:29.441469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420 00:30:32.246 [2024-11-28 08:29:29.441478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set 00:30:32.246 [2024-11-28 08:29:29.441644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor 00:30:32.246 [2024-11-28 08:29:29.441797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:32.246 [2024-11-28 08:29:29.441803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:32.246 [2024-11-28 08:29:29.441808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:32.246 [2024-11-28 08:29:29.441814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:32.775 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:32.775 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:30:32.775 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:32.775 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:32.775 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:32.775 [2024-11-28 08:29:29.857807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.775 [2024-11-28 08:29:29.858280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.775 [2024-11-28 08:29:29.858296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:32.775 [2024-11-28 08:29:29.858301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:32.775 [2024-11-28 08:29:29.858452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:32.775 [2024-11-28 08:29:29.858602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.775 [2024-11-28 08:29:29.858612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.775 [2024-11-28 08:29:29.858617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.775 [2024-11-28 08:29:29.858622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.775 [2024-11-28 08:29:29.870453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.775 [2024-11-28 08:29:29.870943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.775 [2024-11-28 08:29:29.870973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:32.775 [2024-11-28 08:29:29.870982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:32.775 [2024-11-28 08:29:29.871148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:32.775 [2024-11-28 08:29:29.871306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.775 [2024-11-28 08:29:29.871313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.775 [2024-11-28 08:29:29.871319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.775 [2024-11-28 08:29:29.871324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
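The (( i == 0 )) / return 0 trace above is the harness concluding that the freshly started nvmf target answered on its RPC socket, after which timing_exit closes the start_nvmf_tgt phase. A hypothetical simplification of that readiness poll (not the literal waitforlisten helper from the SPDK test harness; rpc_get_methods is a standard SPDK RPC and /var/tmp/spdk.sock the default RPC socket):

  i=0
  # Poll the target's RPC socket until it answers, giving up after ~10 s
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    i=$((i + 1))
    (( i >= 100 )) && { echo "nvmf_tgt never became ready" >&2; exit 1; }
    sleep 0.1
  done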
00:30:32.775 [2024-11-28 08:29:29.883151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.775 [2024-11-28 08:29:29.883695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.775 [2024-11-28 08:29:29.883726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:32.776 [2024-11-28 08:29:29.883735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:32.776 [2024-11-28 08:29:29.883900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:32.776 [2024-11-28 08:29:29.884053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.776 [2024-11-28 08:29:29.884059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.776 [2024-11-28 08:29:29.884065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.776 [2024-11-28 08:29:29.884070] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.776 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:32.776 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:30:32.776 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:32.776 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:32.776 [2024-11-28 08:29:29.895753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.776 [2024-11-28 08:29:29.896281] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:32.776 [2024-11-28 08:29:29.896284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.776 [2024-11-28 08:29:29.896313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:32.776 [2024-11-28 08:29:29.896321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:32.776 [2024-11-28 08:29:29.896489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:32.776 [2024-11-28 08:29:29.896646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.776 [2024-11-28 08:29:29.896652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.776 [2024-11-28 08:29:29.896658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.776 [2024-11-28 08:29:29.896663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
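rpc_cmd in the trace forwards its arguments to the SPDK rpc.py client, so the transport-creation step that produced the *** TCP Transport Init *** notice can be reproduced standalone, roughly as below; the flags are copied verbatim from the trace (-u 8192 sets the I/O unit size, and -o is carried over unchanged from the harness transport options):

  # Create the TCP transport inside the running nvmf target
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192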
00:30:32.776 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:32.776 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:32.776 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:32.776 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:32.776 [2024-11-28 08:29:29.908351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.776 [2024-11-28 08:29:29.908616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.776 [2024-11-28 08:29:29.908641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:32.776 [2024-11-28 08:29:29.908646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:32.776 [2024-11-28 08:29:29.908796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:32.776 [2024-11-28 08:29:29.908946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.776 [2024-11-28 08:29:29.908953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.776 [2024-11-28 08:29:29.908958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.776 [2024-11-28 08:29:29.908963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.776 [2024-11-28 08:29:29.921052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.776 [2024-11-28 08:29:29.921629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.776 [2024-11-28 08:29:29.921660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:32.776 [2024-11-28 08:29:29.921668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:32.776 [2024-11-28 08:29:29.921834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:32.776 [2024-11-28 08:29:29.921987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.776 [2024-11-28 08:29:29.921993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.776 [2024-11-28 08:29:29.921998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.776 [2024-11-28 08:29:29.922004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
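bdev_malloc_create takes the bdev size in MiB followed by the block size in bytes, so the RPC in the trace builds the 64 MiB, 512-byte-block RAM disk that will back the test namespace; as a standalone sketch:

  # 64 MiB malloc bdev with 512-byte blocks, named Malloc0
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0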
00:30:32.776 [2024-11-28 08:29:29.933690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.776 [2024-11-28 08:29:29.934251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.776 [2024-11-28 08:29:29.934281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:32.776 [2024-11-28 08:29:29.934290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:32.776 [2024-11-28 08:29:29.934462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:32.776 [2024-11-28 08:29:29.934615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.776 [2024-11-28 08:29:29.934622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.776 [2024-11-28 08:29:29.934627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.776 [2024-11-28 08:29:29.934633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.776 Malloc0
00:30:32.776 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:32.776 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:32.776 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:32.776 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:32.776 [2024-11-28 08:29:29.946334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.776 [2024-11-28 08:29:29.946862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.776 [2024-11-28 08:29:29.946893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:32.776 [2024-11-28 08:29:29.946902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:32.776 [2024-11-28 08:29:29.947070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:32.776 [2024-11-28 08:29:29.947231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.776 [2024-11-28 08:29:29.947239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.776 [2024-11-28 08:29:29.947245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.776 [2024-11-28 08:29:29.947251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
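The subsystem the initiator has been redialing all along is only created at this point; -a allows any host NQN to connect and -s sets the subsystem serial number:

  # Create the NVMe-oF subsystem that the host-side retries are aimed at
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001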
00:30:32.776 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:32.776 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:32.776 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:32.776 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:32.776 [2024-11-28 08:29:29.958961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.776 [2024-11-28 08:29:29.959534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:32.776 [2024-11-28 08:29:29.959564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e3e010 with addr=10.0.0.2, port=4420
00:30:32.776 [2024-11-28 08:29:29.959573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e3e010 is same with the state(6) to be set
00:30:32.776 [2024-11-28 08:29:29.959738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3e010 (9): Bad file descriptor
00:30:32.776 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:32.776 [2024-11-28 08:29:29.959891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:32.776 [2024-11-28 08:29:29.959898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:32.776 [2024-11-28 08:29:29.959904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:32.776 [2024-11-28 08:29:29.959910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:32.776 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:32.776 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:32.776 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:32.776 [2024-11-28 08:29:29.966921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:32.776 [2024-11-28 08:29:29.971602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:32.776 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:32.776 08:29:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2153727
00:30:32.776 [2024-11-28 08:29:30.036071] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
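Once the namespace is attached and the 10.0.0.2:4420 listener opens, the retry loop converges: the next reconnect lands and the log flips to "Resetting controller successful." while the harness blocks in wait on the background bdevperf job (PID 2153727 in this run). The two closing target-side steps, as a sketch:

  # Attach the malloc bdev as a namespace, then open the TCP listener the host is polling
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420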
00:30:34.291 4574.57 IOPS, 17.87 MiB/s [2024-11-28T07:29:32.524Z] 5608.12 IOPS, 21.91 MiB/s [2024-11-28T07:29:33.466Z] 6422.78 IOPS, 25.09 MiB/s [2024-11-28T07:29:34.424Z] 7069.50 IOPS, 27.62 MiB/s [2024-11-28T07:29:35.478Z] 7600.36 IOPS, 29.69 MiB/s [2024-11-28T07:29:36.421Z] 8042.17 IOPS, 31.41 MiB/s [2024-11-28T07:29:37.391Z] 8422.23 IOPS, 32.90 MiB/s [2024-11-28T07:29:38.773Z] 8748.93 IOPS, 34.18 MiB/s
00:30:41.484 Latency(us)
00:30:41.484 [2024-11-28T07:29:38.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:41.484 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:41.484 Verification LBA range: start 0x0 length 0x4000
00:30:41.484 Nvme1n1 : 15.00 9039.12 35.31 13538.15 0.00 5650.90 570.03 14308.69
00:30:41.484 [2024-11-28T07:29:38.773Z] ===================================================================================================================
00:30:41.484 [2024-11-28T07:29:38.773Z] Total : 9039.12 35.31 13538.15 0.00 5650.90 570.03 14308.69
00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:41.484 rmmod nvme_tcp 00:30:41.484 rmmod nvme_fabrics 00:30:41.484 rmmod nvme_keyring 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2155011 ']' 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2155011 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2155011 ']' 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2155011 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2155011 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf
-- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2155011' 00:30:41.484 killing process with pid 2155011 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2155011 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2155011 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:41.484 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:30:41.744 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:41.745 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:41.745 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.745 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:41.745 08:29:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.656 08:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:43.656 00:30:43.656 real 0m28.191s 00:30:43.656 user 1m2.925s 00:30:43.656 sys 0m7.742s 00:30:43.656 08:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:43.656 08:29:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:43.656 ************************************ 00:30:43.656 END TEST nvmf_bdevperf 00:30:43.656 ************************************ 00:30:43.656 08:29:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:43.656 08:29:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:43.656 08:29:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:43.656 08:29:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:43.656 ************************************ 00:30:43.656 START TEST nvmf_target_disconnect 00:30:43.656 ************************************ 00:30:43.656 08:29:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:43.919 * Looking for test storage... 
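The nvmf_bdevperf teardown just above (nvmftestfini) unwinds the whole setup: the host-side kernel modules are unloaded, the nvmf_tgt process (pid 2155011) is killed, the SPDK-tagged iptables rules are stripped, and the test namespace and addresses are removed. Condensed into a sketch; the cvl_0_* names are the ones this job uses, and $nvmfpid is a placeholder for the stored target pid:

modprobe -v -r nvme-tcp          # pulls out nvme_tcp/nvme_fabrics/nvme_keyring, as logged
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                                        # $nvmfpid: saved nvmf_tgt pid
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK test rules
ip netns del cvl_0_0_ns_spdk 2>/dev/null               # target-side namespace, if present
ip -4 addr flush cvl_0_1                               # clear the initiator-side address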
00:30:43.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:43.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.919 --rc genhtml_branch_coverage=1 00:30:43.919 --rc genhtml_function_coverage=1 00:30:43.919 --rc genhtml_legend=1 00:30:43.919 --rc geninfo_all_blocks=1 00:30:43.919 --rc geninfo_unexecuted_blocks=1 00:30:43.919 00:30:43.919 ' 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:43.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.919 --rc genhtml_branch_coverage=1 00:30:43.919 --rc genhtml_function_coverage=1 00:30:43.919 --rc genhtml_legend=1 00:30:43.919 --rc geninfo_all_blocks=1 00:30:43.919 --rc geninfo_unexecuted_blocks=1 00:30:43.919 00:30:43.919 ' 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:43.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.919 --rc genhtml_branch_coverage=1 00:30:43.919 --rc genhtml_function_coverage=1 00:30:43.919 --rc genhtml_legend=1 00:30:43.919 --rc geninfo_all_blocks=1 00:30:43.919 --rc geninfo_unexecuted_blocks=1 00:30:43.919 00:30:43.919 ' 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:43.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.919 --rc genhtml_branch_coverage=1 00:30:43.919 --rc genhtml_function_coverage=1 00:30:43.919 --rc genhtml_legend=1 00:30:43.919 --rc geninfo_all_blocks=1 00:30:43.919 --rc geninfo_unexecuted_blocks=1 00:30:43.919 00:30:43.919 ' 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:43.919 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:43.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:30:43.920 08:29:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:52.065 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:52.065 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:30:52.065 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:52.065 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:52.065 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:52.065 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:52.065 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:52.065 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:30:52.065 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:52.065 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:30:52.065 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:30:52.065 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:52.066 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:52.066 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:52.066 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:52.066 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
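The device scan above is plain sysfs walking: for each PCI function matching the e810 allowlist (vendor 0x8086, device 0x159b here), the framework looks under /sys/bus/pci/devices/<bdf>/net/ for the bound kernel netdev and records it, which is where the cvl_0_0 and cvl_0_1 names come from. The same loop as a standalone sketch:

# Enumerate Intel E810 ports (0x8086:0x159b) and print their netdevs, sysfs only.
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
    done
done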
00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:52.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:52.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:30:52.066 00:30:52.066 --- 10.0.0.2 ping statistics --- 00:30:52.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.066 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:52.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:52.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:30:52.066 00:30:52.066 --- 10.0.0.1 ping statistics --- 00:30:52.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.066 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:52.066 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:52.067 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:52.067 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:52.067 ************************************ 00:30:52.067 START TEST nvmf_target_disconnect_tc1 00:30:52.067 ************************************ 00:30:52.067 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:30:52.067 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:52.067 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:30:52.067 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:52.067 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:52.067 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:52.067 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:52.067 08:29:48 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:52.067 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:52.067 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:52.067 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:52.067 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:52.067 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:52.067 [2024-11-28 08:29:48.916380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.067 [2024-11-28 08:29:48.916485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x174fae0 with addr=10.0.0.2, port=4420 00:30:52.067 [2024-11-28 08:29:48.916518] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:52.067 [2024-11-28 08:29:48.916530] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:52.067 [2024-11-28 08:29:48.916539] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:30:52.067 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:52.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:52.067 Initializing NVMe Controllers 00:30:52.067 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:30:52.067 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:52.067 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:52.067 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:52.067 00:30:52.067 real 0m0.155s 00:30:52.067 user 0m0.069s 00:30:52.067 sys 0m0.085s 00:30:52.067 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:52.067 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:52.067 ************************************ 00:30:52.067 END TEST nvmf_target_disconnect_tc1 00:30:52.067 ************************************ 00:30:52.067 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:52.067 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:52.067 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:30:52.067 08:29:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:52.067 ************************************ 00:30:52.067 START TEST nvmf_target_disconnect_tc2 00:30:52.067 ************************************ 00:30:52.067 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:30:52.067 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:52.067 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:52.067 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:52.067 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:52.067 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.067 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2161126 00:30:52.067 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2161126 00:30:52.067 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:52.067 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2161126 ']' 00:30:52.067 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:52.067 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:52.067 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:52.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:52.067 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:52.067 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.067 [2024-11-28 08:29:49.079424] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:30:52.067 [2024-11-28 08:29:49.079483] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:52.067 [2024-11-28 08:29:49.178928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:52.067 [2024-11-28 08:29:49.231695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:52.067 [2024-11-28 08:29:49.231746] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
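tc1 above passes precisely because the reconnect example fails: with no target configured yet, spdk_nvme_probe() cannot create the admin qpair, the NOT-style wrapper records es=1, and (( !es == 0 )) turns the expected failure into a green test. The inversion pattern, reduced to a sketch around the exact command line from the log:

expect_failure() {
    "$@"              # run the wrapped command
    local es=$?
    (( es != 0 ))     # succeed only if it failed
}
expect_failure /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
    -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'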
00:30:52.067 [2024-11-28 08:29:49.231755] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:52.067 [2024-11-28 08:29:49.231763] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:52.067 [2024-11-28 08:29:49.231769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:52.067 [2024-11-28 08:29:49.233891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:52.067 [2024-11-28 08:29:49.234051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:52.067 [2024-11-28 08:29:49.234218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:52.067 [2024-11-28 08:29:49.234254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:52.639 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:52.639 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:52.639 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:52.639 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:52.639 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.900 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:52.901 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:52.901 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.901 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.901 Malloc0 00:30:52.901 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.901 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:52.901 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.901 08:29:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.901 [2024-11-28 08:29:50.000511] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:52.901 08:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.901 08:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:52.901 08:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.901 08:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.901 08:29:50 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.901 08:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:52.901 08:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.901 08:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.901 08:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.901 08:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:52.901 08:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.901 08:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.901 [2024-11-28 08:29:50.040931] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:52.901 08:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.901 08:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:52.901 08:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.901 08:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.901 08:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.901 08:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2161200 00:30:52.901 08:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:52.901 08:29:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:54.820 08:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2161126 00:30:54.820 08:29:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with 
error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 [2024-11-28 08:29:52.076222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 
Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Read completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.820 Write completed with error (sct=0, sc=8) 00:30:54.820 starting I/O failed 00:30:54.821 [2024-11-28 08:29:52.076577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:54.821 [2024-11-28 08:29:52.076841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.821 [2024-11-28 08:29:52.076861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:54.821 qpair failed and we were unable to recover it. 00:30:54.821 [2024-11-28 08:29:52.077179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.821 [2024-11-28 08:29:52.077192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:54.821 qpair failed and we were unable to recover it. 00:30:54.821 [2024-11-28 08:29:52.077574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.821 [2024-11-28 08:29:52.077632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:54.821 qpair failed and we were unable to recover it. 00:30:54.821 [2024-11-28 08:29:52.077999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.821 [2024-11-28 08:29:52.078014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:54.821 qpair failed and we were unable to recover it. 00:30:54.821 [2024-11-28 08:29:52.078374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.821 [2024-11-28 08:29:52.078402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:54.821 qpair failed and we were unable to recover it. 
00:30:55.104 [2024-11-28 08:29:52.146710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.146739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.147076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.147107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.147461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.147492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.147843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.147873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.148244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.148274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.148632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.148661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.148948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.148977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.149339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.149370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.149801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.149832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.150195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.150225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 
00:30:55.104 [2024-11-28 08:29:52.150587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.150617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.150978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.151007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.151344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.151374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.151734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.151762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.152150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.152190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.152447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.152478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.152813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.152843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.153225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.153257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.153493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.153522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.153875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.153906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 
00:30:55.104 [2024-11-28 08:29:52.154244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.154274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.154650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.154679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.155037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.155065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.155425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.155454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.155849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.155878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.156248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.156278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.156640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.156670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.157029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.157058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.157424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.157454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.104 qpair failed and we were unable to recover it. 00:30:55.104 [2024-11-28 08:29:52.157875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.104 [2024-11-28 08:29:52.157904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 
00:30:55.105 [2024-11-28 08:29:52.158258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.158290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.158669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.158700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.159048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.159083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.159442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.159473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.159823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.159852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.160217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.160247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.160607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.160636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.160966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.160997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.161344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.161374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.161737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.161767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 
00:30:55.105 [2024-11-28 08:29:52.162124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.162155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.162538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.162568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.162932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.162961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.163354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.163385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.163763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.163792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.164172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.164202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.164597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.164626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.164990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.165020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.165391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.165421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.165782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.165811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 
00:30:55.105 [2024-11-28 08:29:52.166178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.166207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.166559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.166588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.166948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.166978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.167355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.167386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.167760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.167791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.168199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.168230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.168588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.168617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.105 [2024-11-28 08:29:52.168978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.105 [2024-11-28 08:29:52.169012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.105 qpair failed and we were unable to recover it. 00:30:55.106 [2024-11-28 08:29:52.169380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.106 [2024-11-28 08:29:52.169410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.106 qpair failed and we were unable to recover it. 00:30:55.106 [2024-11-28 08:29:52.169768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.106 [2024-11-28 08:29:52.169798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.106 qpair failed and we were unable to recover it. 
00:30:55.106 [2024-11-28 08:29:52.170167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.106 [2024-11-28 08:29:52.170198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.106 qpair failed and we were unable to recover it. 00:30:55.106 [2024-11-28 08:29:52.170564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.106 [2024-11-28 08:29:52.170597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.106 qpair failed and we were unable to recover it. 00:30:55.106 [2024-11-28 08:29:52.170961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.106 [2024-11-28 08:29:52.170990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.106 qpair failed and we were unable to recover it. 00:30:55.106 [2024-11-28 08:29:52.171351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.106 [2024-11-28 08:29:52.171383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.106 qpair failed and we were unable to recover it. 00:30:55.106 [2024-11-28 08:29:52.171731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.106 [2024-11-28 08:29:52.171760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.106 qpair failed and we were unable to recover it. 00:30:55.106 [2024-11-28 08:29:52.172137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.106 [2024-11-28 08:29:52.172175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.106 qpair failed and we were unable to recover it. 00:30:55.106 [2024-11-28 08:29:52.172545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.106 [2024-11-28 08:29:52.172574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.106 qpair failed and we were unable to recover it. 00:30:55.106 [2024-11-28 08:29:52.172925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.106 [2024-11-28 08:29:52.172954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.106 qpair failed and we were unable to recover it. 00:30:55.106 [2024-11-28 08:29:52.173263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.106 [2024-11-28 08:29:52.173293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.106 qpair failed and we were unable to recover it. 00:30:55.106 [2024-11-28 08:29:52.173638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.106 [2024-11-28 08:29:52.173668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.106 qpair failed and we were unable to recover it. 
00:30:55.106 [2024-11-28 08:29:52.174032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.106 [2024-11-28 08:29:52.174062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.106 qpair failed and we were unable to recover it. 00:30:55.106 [2024-11-28 08:29:52.174434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.106 [2024-11-28 08:29:52.174465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.106 qpair failed and we were unable to recover it. 00:30:55.106 [2024-11-28 08:29:52.174817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.106 [2024-11-28 08:29:52.174846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.106 qpair failed and we were unable to recover it. 00:30:55.106 [2024-11-28 08:29:52.175251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.106 [2024-11-28 08:29:52.175283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.106 qpair failed and we were unable to recover it. 00:30:55.106 [2024-11-28 08:29:52.175638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.106 [2024-11-28 08:29:52.175667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.106 qpair failed and we were unable to recover it. 00:30:55.106 [2024-11-28 08:29:52.176034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.106 [2024-11-28 08:29:52.176063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.106 qpair failed and we were unable to recover it. 00:30:55.106 [2024-11-28 08:29:52.176429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.106 [2024-11-28 08:29:52.176460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.106 qpair failed and we were unable to recover it. 00:30:55.106 [2024-11-28 08:29:52.176819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.106 [2024-11-28 08:29:52.176849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.106 qpair failed and we were unable to recover it. 00:30:55.106 [2024-11-28 08:29:52.177215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.106 [2024-11-28 08:29:52.177246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.106 qpair failed and we were unable to recover it. 00:30:55.106 [2024-11-28 08:29:52.177600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.106 [2024-11-28 08:29:52.177632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.106 qpair failed and we were unable to recover it. 
00:30:55.106 [2024-11-28 08:29:52.177978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.106 [2024-11-28 08:29:52.178007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.178356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.178388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.178731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.178761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.179097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.179132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.179516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.179558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.179978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.180007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.180248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.180277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.180651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.180680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.181043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.181072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.181337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.181367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 
00:30:55.107 [2024-11-28 08:29:52.181750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.181779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.182141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.182179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.182526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.182557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.182913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.182943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.183302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.183333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.183596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.183625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.184004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.184034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.184393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.184423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.184799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.184828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.185198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.185228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 
00:30:55.107 [2024-11-28 08:29:52.185617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.185646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.186083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.186112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.186473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.186503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.186861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.186890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.187244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.187274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.187554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.187582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.187935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.187964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.188324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.188356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.188725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.188753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 00:30:55.107 [2024-11-28 08:29:52.189005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.107 [2024-11-28 08:29:52.189038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.107 qpair failed and we were unable to recover it. 
00:30:55.107 [2024-11-28 08:29:52.189410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.108 [2024-11-28 08:29:52.189440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.108 qpair failed and we were unable to recover it. 00:30:55.108 [2024-11-28 08:29:52.189687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.108 [2024-11-28 08:29:52.189718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.108 qpair failed and we were unable to recover it. 00:30:55.108 [2024-11-28 08:29:52.190085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.108 [2024-11-28 08:29:52.190114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.108 qpair failed and we were unable to recover it. 00:30:55.108 [2024-11-28 08:29:52.190447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.108 [2024-11-28 08:29:52.190479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.108 qpair failed and we were unable to recover it. 00:30:55.108 [2024-11-28 08:29:52.190829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.108 [2024-11-28 08:29:52.190858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.108 qpair failed and we were unable to recover it. 00:30:55.108 [2024-11-28 08:29:52.191193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.108 [2024-11-28 08:29:52.191225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.108 qpair failed and we were unable to recover it. 00:30:55.108 [2024-11-28 08:29:52.191573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.108 [2024-11-28 08:29:52.191604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.108 qpair failed and we were unable to recover it. 00:30:55.108 [2024-11-28 08:29:52.191969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.108 [2024-11-28 08:29:52.191998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.108 qpair failed and we were unable to recover it. 00:30:55.108 [2024-11-28 08:29:52.192341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.108 [2024-11-28 08:29:52.192372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.108 qpair failed and we were unable to recover it. 00:30:55.108 [2024-11-28 08:29:52.192749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.108 [2024-11-28 08:29:52.192778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.108 qpair failed and we were unable to recover it. 
00:30:55.108 [2024-11-28 08:29:52.193157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.108 [2024-11-28 08:29:52.193215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.108 qpair failed and we were unable to recover it. 00:30:55.108 [2024-11-28 08:29:52.193501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.108 [2024-11-28 08:29:52.193530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.108 qpair failed and we were unable to recover it. 00:30:55.108 [2024-11-28 08:29:52.193903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.108 [2024-11-28 08:29:52.193933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.108 qpair failed and we were unable to recover it. 00:30:55.108 [2024-11-28 08:29:52.194304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.108 [2024-11-28 08:29:52.194334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.108 qpair failed and we were unable to recover it. 00:30:55.108 [2024-11-28 08:29:52.194597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.108 [2024-11-28 08:29:52.194627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.108 qpair failed and we were unable to recover it. 00:30:55.108 [2024-11-28 08:29:52.194991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.108 [2024-11-28 08:29:52.195020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.108 qpair failed and we were unable to recover it. 00:30:55.108 [2024-11-28 08:29:52.195392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.108 [2024-11-28 08:29:52.195422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.108 qpair failed and we were unable to recover it. 00:30:55.108 [2024-11-28 08:29:52.195685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.108 [2024-11-28 08:29:52.195714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.108 qpair failed and we were unable to recover it. 00:30:55.108 [2024-11-28 08:29:52.196093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.108 [2024-11-28 08:29:52.196122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.108 qpair failed and we were unable to recover it. 00:30:55.108 [2024-11-28 08:29:52.196485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.108 [2024-11-28 08:29:52.196515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.108 qpair failed and we were unable to recover it. 
00:30:55.108 [2024-11-28 08:29:52.196886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.108 [2024-11-28 08:29:52.196914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.108 qpair failed and we were unable to recover it.
00:30:55.115 (last three messages repeated with advancing timestamps through [2024-11-28 08:29:52.280688]: every connect() attempt to 10.0.0.2, port=4420 failed with errno = 111, and tqpair=0x18cc0c0 could not be recovered)
00:30:55.115 [2024-11-28 08:29:52.281048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.115 [2024-11-28 08:29:52.281078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.115 qpair failed and we were unable to recover it. 00:30:55.115 [2024-11-28 08:29:52.281445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.115 [2024-11-28 08:29:52.281475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.115 qpair failed and we were unable to recover it. 00:30:55.115 [2024-11-28 08:29:52.281830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.115 [2024-11-28 08:29:52.281860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.115 qpair failed and we were unable to recover it. 00:30:55.115 [2024-11-28 08:29:52.282224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.115 [2024-11-28 08:29:52.282254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.115 qpair failed and we were unable to recover it. 00:30:55.115 [2024-11-28 08:29:52.282590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.115 [2024-11-28 08:29:52.282620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.115 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.282995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.283025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.283389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.283420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.283796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.283826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.284188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.284221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.284462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.284495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 
00:30:55.116 [2024-11-28 08:29:52.284831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.284861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.285236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.285266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.285663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.285691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.286051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.286080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.286447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.286478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.286914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.286944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.287303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.287333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.287706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.287737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.288106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.288136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.288414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.288443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 
00:30:55.116 [2024-11-28 08:29:52.288820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.288849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.289213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.289245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.289652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.289681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.290043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.290074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.290433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.290464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.290727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.290759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.291190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.291221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.291621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.291650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.292009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.292038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.292386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.292423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 
00:30:55.116 [2024-11-28 08:29:52.292765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.292796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.293049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.293078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.293482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.293514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.116 qpair failed and we were unable to recover it. 00:30:55.116 [2024-11-28 08:29:52.293884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.116 [2024-11-28 08:29:52.293914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 00:30:55.117 [2024-11-28 08:29:52.294281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.294311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 00:30:55.117 [2024-11-28 08:29:52.294678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.294707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 00:30:55.117 [2024-11-28 08:29:52.295070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.295100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 00:30:55.117 [2024-11-28 08:29:52.295485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.295517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 00:30:55.117 [2024-11-28 08:29:52.295765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.295798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 00:30:55.117 [2024-11-28 08:29:52.296148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.296187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 
00:30:55.117 [2024-11-28 08:29:52.296593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.296624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 00:30:55.117 [2024-11-28 08:29:52.296987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.297018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 00:30:55.117 [2024-11-28 08:29:52.297387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.297418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 00:30:55.117 [2024-11-28 08:29:52.297765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.297795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 00:30:55.117 [2024-11-28 08:29:52.298126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.298156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 00:30:55.117 [2024-11-28 08:29:52.298520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.298550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 00:30:55.117 [2024-11-28 08:29:52.298921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.298950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 00:30:55.117 [2024-11-28 08:29:52.299321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.299352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 00:30:55.117 [2024-11-28 08:29:52.299713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.299741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 00:30:55.117 [2024-11-28 08:29:52.300000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.300029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 
00:30:55.117 [2024-11-28 08:29:52.300354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.300384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 00:30:55.117 [2024-11-28 08:29:52.300745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.300774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 00:30:55.117 [2024-11-28 08:29:52.301134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.301176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 00:30:55.117 [2024-11-28 08:29:52.301522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.301551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 00:30:55.117 [2024-11-28 08:29:52.301912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.301941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 00:30:55.117 [2024-11-28 08:29:52.302311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.302342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 00:30:55.117 [2024-11-28 08:29:52.302691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.302726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 00:30:55.117 [2024-11-28 08:29:52.303066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.303094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 00:30:55.117 [2024-11-28 08:29:52.303433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.303464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 00:30:55.117 [2024-11-28 08:29:52.303828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.117 [2024-11-28 08:29:52.303858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.117 qpair failed and we were unable to recover it. 
00:30:55.117 [2024-11-28 08:29:52.304231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.304262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.304625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.304656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.305042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.305071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.305403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.305434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.305802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.305831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.306195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.306226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.306588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.306617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.306979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.307008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.307386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.307416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.307784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.307813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 
00:30:55.118 [2024-11-28 08:29:52.308256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.308288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.308647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.308676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.309039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.309069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.309452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.309482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.309846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.309876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.310250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.310281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.310658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.310686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.311048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.311077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.311433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.311463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.311800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.311830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 
00:30:55.118 [2024-11-28 08:29:52.312191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.312221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.312580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.312611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.312965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.312996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.313271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.313307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.313668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.313696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.314060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.314090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.314435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.118 [2024-11-28 08:29:52.314465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.118 qpair failed and we were unable to recover it. 00:30:55.118 [2024-11-28 08:29:52.314828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.314858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.315223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.315253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.315509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.315538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 
00:30:55.119 [2024-11-28 08:29:52.315890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.315919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.316287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.316317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.316701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.316732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.317115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.317146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.317521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.317551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.317903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.317932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.318262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.318292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.318639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.318669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.319038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.319066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.319406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.319437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 
00:30:55.119 [2024-11-28 08:29:52.319777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.319807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.320175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.320206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.320474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.320503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.320882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.320910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.321254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.321284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.321651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.321681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.321929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.321961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.322325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.322356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.322792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.322821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.323169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.323200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 
00:30:55.119 [2024-11-28 08:29:52.323608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.323637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.323994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.324024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.324393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.324423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.324768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.324797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.325181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.325211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.325579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.119 [2024-11-28 08:29:52.325609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.119 qpair failed and we were unable to recover it. 00:30:55.119 [2024-11-28 08:29:52.325941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.120 [2024-11-28 08:29:52.325971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.120 qpair failed and we were unable to recover it. 00:30:55.120 [2024-11-28 08:29:52.326335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.120 [2024-11-28 08:29:52.326367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.120 qpair failed and we were unable to recover it. 00:30:55.120 [2024-11-28 08:29:52.326739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.120 [2024-11-28 08:29:52.326768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.120 qpair failed and we were unable to recover it. 00:30:55.120 [2024-11-28 08:29:52.327195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.120 [2024-11-28 08:29:52.327225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.120 qpair failed and we were unable to recover it. 
00:30:55.120 [2024-11-28 08:29:52.327635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.120 [2024-11-28 08:29:52.327664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.120 qpair failed and we were unable to recover it. 00:30:55.120 [2024-11-28 08:29:52.328030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.120 [2024-11-28 08:29:52.328060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.120 qpair failed and we were unable to recover it. 00:30:55.120 [2024-11-28 08:29:52.328429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.120 [2024-11-28 08:29:52.328459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.120 qpair failed and we were unable to recover it. 00:30:55.120 [2024-11-28 08:29:52.328818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.120 [2024-11-28 08:29:52.328849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.120 qpair failed and we were unable to recover it. 00:30:55.120 [2024-11-28 08:29:52.329217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.120 [2024-11-28 08:29:52.329248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.120 qpair failed and we were unable to recover it. 00:30:55.120 [2024-11-28 08:29:52.329625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.120 [2024-11-28 08:29:52.329654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.120 qpair failed and we were unable to recover it. 00:30:55.120 [2024-11-28 08:29:52.330076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.120 [2024-11-28 08:29:52.330104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.120 qpair failed and we were unable to recover it. 00:30:55.120 [2024-11-28 08:29:52.330447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.120 [2024-11-28 08:29:52.330478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.120 qpair failed and we were unable to recover it. 00:30:55.120 [2024-11-28 08:29:52.330816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.120 [2024-11-28 08:29:52.330846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.120 qpair failed and we were unable to recover it. 00:30:55.120 [2024-11-28 08:29:52.331191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.120 [2024-11-28 08:29:52.331223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.120 qpair failed and we were unable to recover it. 
00:30:55.120 [2024-11-28 08:29:52.331571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.120 [2024-11-28 08:29:52.331601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.120 qpair failed and we were unable to recover it.
00:30:55.120-00:30:55.406 -- the three messages above repeat ~209 more times with identical content (device timestamps 08:29:52.331 through 08:29:52.408; errno = 111 and tqpair=0x18cc0c0 on every attempt); duplicate records condensed --
00:30:55.406 [2024-11-28 08:29:52.408327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.408357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.408752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.408782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.409127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.409157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.409512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.409541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.409907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.409935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.410189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.410221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.410440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.410469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.410746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.410776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.411120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.411151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.411527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.411556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 
00:30:55.406 [2024-11-28 08:29:52.411923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.411953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.412313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.412343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.412751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.412780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.413196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.413228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.413581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.413611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.413857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.413885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.414087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.414118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.414527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.414558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.414993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.415023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.415395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.415424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 
00:30:55.406 [2024-11-28 08:29:52.415798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.415829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.416199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.416229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.416460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.416489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.416887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.416918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.417284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.417314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.417709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.417738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.417989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.418021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.418286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.418317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.418683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.418719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.419096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.419125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 
00:30:55.406 [2024-11-28 08:29:52.419499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.419531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.419959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.419988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.420238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.420268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.420631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.420660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.421052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.421081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.406 [2024-11-28 08:29:52.421452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.406 [2024-11-28 08:29:52.421482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.406 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.421881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.421910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.422265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.422295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.422658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.422687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.423060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.423088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 
00:30:55.407 [2024-11-28 08:29:52.423454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.423483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.423861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.423890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.424256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.424288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.424665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.424693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.425057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.425087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.425302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.425333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.425716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.425753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.426120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.426150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.426535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.426566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.426810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.426839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 
00:30:55.407 [2024-11-28 08:29:52.427098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.427128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.427520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.427551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.427841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.427871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.428231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.428262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.428642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.428671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.429028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.429063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.429398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.429429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.429798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.429826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.430191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.430221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.430582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.430611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 
00:30:55.407 [2024-11-28 08:29:52.430967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.430997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.431260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.431289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.431664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.431692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.432057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.432085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.432457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.432487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.432872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.432901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.433260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.433292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.433635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.433664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.433947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.433975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.434330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.434360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 
00:30:55.407 [2024-11-28 08:29:52.434670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.434700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.435064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.435095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.435440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.435471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.435836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.407 [2024-11-28 08:29:52.435865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.407 qpair failed and we were unable to recover it. 00:30:55.407 [2024-11-28 08:29:52.436241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.436271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.436634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.436664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.437034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.437062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.437407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.437439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.437801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.437830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.438195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.438225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 
00:30:55.408 [2024-11-28 08:29:52.438590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.438618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.438968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.438996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.439257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.439286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.439656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.439687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.440046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.440075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.440443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.440472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.440825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.440854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.441157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.441193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.441613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.441641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.441866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.441896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 
00:30:55.408 [2024-11-28 08:29:52.442273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.442303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.442674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.442702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.443065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.443094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.443458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.443488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.443787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.443816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.444058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.444087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.444336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.444367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.444732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.444761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.445121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.445150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.445495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.445524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 
00:30:55.408 [2024-11-28 08:29:52.445891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.445921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.446369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.446400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.446777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.446807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.447141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.447178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.447540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.447570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.447747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.447775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.408 qpair failed and we were unable to recover it. 00:30:55.408 [2024-11-28 08:29:52.448136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.408 [2024-11-28 08:29:52.448177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.448539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.448568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.448922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.448951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.449328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.449359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 
00:30:55.409 [2024-11-28 08:29:52.449746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.449776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.450153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.450202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.450538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.450570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.450946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.450976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.451329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.451361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.451686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.451715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.452152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.452191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.452577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.452606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.452979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.453009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.453253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.453284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 
00:30:55.409 [2024-11-28 08:29:52.453691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.453722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.454073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.454103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.454446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.454475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.454852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.454888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.455245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.455275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.455678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.455707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.456071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.456099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.456463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.456493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.456853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.456882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.457316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.457346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 
00:30:55.409 [2024-11-28 08:29:52.457712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.457742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.458067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.458095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.458403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.458433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.458773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.458803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.459188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.459218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.459555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.459582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.460006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.460038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.460444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.460475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.460830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.460859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 00:30:55.409 [2024-11-28 08:29:52.461236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.409 [2024-11-28 08:29:52.461268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.409 qpair failed and we were unable to recover it. 
00:30:55.409 - 00:30:55.414 [2024-11-28 08:29:52.461634 - 08:29:52.538449] (elided: the same three-record failure — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats ~190 more times in this interval)
00:30:55.414 [2024-11-28 08:29:52.538820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.414 [2024-11-28 08:29:52.538852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.414 qpair failed and we were unable to recover it. 00:30:55.414 [2024-11-28 08:29:52.539237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.414 [2024-11-28 08:29:52.539268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.414 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.539604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.539635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.540009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.540038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.540391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.540423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.540664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.540693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.541042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.541072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.541411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.541442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.541798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.541829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.542198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.542227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 
00:30:55.415 [2024-11-28 08:29:52.542672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.542713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.543041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.543071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.543397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.543427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.543790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.543818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.544080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.544108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.544491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.544521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.544767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.544796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.545149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.545197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.545590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.545619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.545965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.545994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 
00:30:55.415 [2024-11-28 08:29:52.546350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.546380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.546708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.546737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.547066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.547094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.547441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.547472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.547717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.547746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.548096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.548125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.548514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.548545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.548906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.548936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.549296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.549326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.549701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.549730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 
00:30:55.415 [2024-11-28 08:29:52.550088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.550116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.550510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.550540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.550889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.550919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.551274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.551304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.551562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.551590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.551948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.551978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.552340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.552370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.552723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.552760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.553000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.553029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 00:30:55.415 [2024-11-28 08:29:52.553388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.415 [2024-11-28 08:29:52.553419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.415 qpair failed and we were unable to recover it. 
00:30:55.415 [2024-11-28 08:29:52.553753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.553782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.554126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.554154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.554414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.554443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.554706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.554734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.555100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.555128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.555508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.555538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.555969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.555998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.556344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.556374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.556736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.556764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.557206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.557236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 
00:30:55.416 [2024-11-28 08:29:52.557597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.557625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.558000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.558029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.558401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.558429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.558783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.558811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.559179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.559209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.559562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.559590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.559837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.559869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.560200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.560230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.560604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.560633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.560991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.561019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 
00:30:55.416 [2024-11-28 08:29:52.561335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.561365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.561817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.561845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.562178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.562210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.562575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.562604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.562969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.563005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.563369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.563399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.563769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.563798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.564170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.564201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.564585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.564613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.564979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.565008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 
00:30:55.416 [2024-11-28 08:29:52.565385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.565417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.565753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.565782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.566143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.566181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.566544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.566574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.566953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.566981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.567342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.567371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.567741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.567769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.568195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.568224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.416 [2024-11-28 08:29:52.568598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.416 [2024-11-28 08:29:52.568628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.416 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.568907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.568936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 
00:30:55.417 [2024-11-28 08:29:52.569285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.569315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.569694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.569722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.570092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.570122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.570477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.570506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.570869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.570898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.571260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.571289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.571646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.571675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.572038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.572067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.572408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.572438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.572799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.572830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 
00:30:55.417 [2024-11-28 08:29:52.573185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.573217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.573586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.573615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.573995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.574026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.574408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.574439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.574642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.574674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.575021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.575051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.575383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.575415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.575760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.575788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.576170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.576200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.576556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.576585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 
00:30:55.417 [2024-11-28 08:29:52.576951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.576981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.577298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.577329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.577698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.577728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.578100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.578129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.578494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.578523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.578887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.578917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.579278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.579309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.579686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.579716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.580026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.580055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.580402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.580433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 
00:30:55.417 [2024-11-28 08:29:52.580816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.580845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.417 [2024-11-28 08:29:52.581108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.417 [2024-11-28 08:29:52.581139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.417 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.581540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.581570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.581915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.581944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.582301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.582331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.582696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.582724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.583056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.583084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.583432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.583461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.583827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.583857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.584214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.584244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 
00:30:55.418 [2024-11-28 08:29:52.584620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.584648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.585018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.585046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.585435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.585465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.585824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.585852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.586113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.586142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.586503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.586533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.586902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.586931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.587380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.587411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.587754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.587784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.588139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.588175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 
00:30:55.418 [2024-11-28 08:29:52.588528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.588558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.588939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.588967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.589237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.589273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.589522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.589554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.589849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.589878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.590257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.590288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.590669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.590697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.591067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.591096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.591465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.591496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 00:30:55.418 [2024-11-28 08:29:52.591920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.418 [2024-11-28 08:29:52.591948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.418 qpair failed and we were unable to recover it. 
00:30:55.418 [2024-11-28 08:29:52.592324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.418 [2024-11-28 08:29:52.592354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.418 qpair failed and we were unable to recover it.
[... identical connect() retry failures (errno = 111) against tqpair=0x18cc0c0, addr=10.0.0.2, port=4420 repeat continuously from 08:29:52.592 through 08:29:52.672; duplicate entries elided ...]
00:30:55.426 [2024-11-28 08:29:52.671868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.426 [2024-11-28 08:29:52.671903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.426 qpair failed and we were unable to recover it.
00:30:55.426 [2024-11-28 08:29:52.672269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.426 [2024-11-28 08:29:52.672300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.426 qpair failed and we were unable to recover it. 00:30:55.426 [2024-11-28 08:29:52.672666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.426 [2024-11-28 08:29:52.672695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.426 qpair failed and we were unable to recover it. 00:30:55.426 [2024-11-28 08:29:52.672961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.426 [2024-11-28 08:29:52.672989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.426 qpair failed and we were unable to recover it. 00:30:55.426 [2024-11-28 08:29:52.673331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.426 [2024-11-28 08:29:52.673360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.426 qpair failed and we were unable to recover it. 00:30:55.426 [2024-11-28 08:29:52.673623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.426 [2024-11-28 08:29:52.673655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.426 qpair failed and we were unable to recover it. 00:30:55.426 [2024-11-28 08:29:52.674002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.426 [2024-11-28 08:29:52.674031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.426 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.674388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.674422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.674800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.674828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.675191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.675222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.675581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.675610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 
00:30:55.698 [2024-11-28 08:29:52.675959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.675988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.676365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.676395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.676753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.676782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.677151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.677189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.677552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.677581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.677936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.677964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.678328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.678360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.678731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.678760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.679125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.679154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.679501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.679529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 
00:30:55.698 [2024-11-28 08:29:52.679900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.679928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.680282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.680311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.680573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.680601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.680957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.680985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.681351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.681382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.681730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.681758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.682123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.682168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.682540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.682577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.682818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.682851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.683190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.683221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 
00:30:55.698 [2024-11-28 08:29:52.683571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.683599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.683963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.683991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.684362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.684393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.684742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.684771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.685138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.685175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.685541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.685570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.698 [2024-11-28 08:29:52.685941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.698 [2024-11-28 08:29:52.685971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.698 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.686322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.686351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.686703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.686731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.687098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.687126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 
00:30:55.699 [2024-11-28 08:29:52.687518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.687548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.687794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.687826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.688183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.688215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.688571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.688601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.688996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.689024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.689400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.689430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.689794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.689822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.690191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.690221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.690586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.690616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.690968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.690996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 
00:30:55.699 [2024-11-28 08:29:52.691380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.691410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.691759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.691787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.692145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.692184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.692523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.692552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.692931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.692960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.693373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.693403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.693742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.693770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.694129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.694167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.694526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.694554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.694910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.694940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 
00:30:55.699 [2024-11-28 08:29:52.695295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.695325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.695693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.695722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.696080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.696108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.696475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.696505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.696872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.696900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.697236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.697266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.697643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.697671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.698040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.698070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.698441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.698471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.698850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.698878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 
00:30:55.699 [2024-11-28 08:29:52.699242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.699272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.699637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.699666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.700030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.700059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.700423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.700452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.699 [2024-11-28 08:29:52.700819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.699 [2024-11-28 08:29:52.700848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.699 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.701122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.701150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.701541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.701570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.701933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.701962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.702343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.702373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.702731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.702759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 
00:30:55.700 [2024-11-28 08:29:52.703124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.703152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.703480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.703509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.703869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.703898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.704262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.704291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.704563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.704591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.704947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.704975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.705243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.705273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.705678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.705707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.706076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.706104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.706541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.706571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 
00:30:55.700 [2024-11-28 08:29:52.706999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.707028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.707294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.707324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.707702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.707732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.708094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.708122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.708482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.708517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.708782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.708810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.709175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.709206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.709545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.709574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.709944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.709972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.710346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.710376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 
00:30:55.700 [2024-11-28 08:29:52.710747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.710775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.711214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.711245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.711596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.711626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.711971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.712000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.712366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.712395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.712661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.712689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.713038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.713066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.713404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.713435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.713806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.713834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.714196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.714225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 
00:30:55.700 [2024-11-28 08:29:52.714623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.714652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.715011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.715040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.715291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.700 [2024-11-28 08:29:52.715321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.700 qpair failed and we were unable to recover it. 00:30:55.700 [2024-11-28 08:29:52.715708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.701 [2024-11-28 08:29:52.715737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.701 qpair failed and we were unable to recover it. 00:30:55.701 [2024-11-28 08:29:52.716106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.701 [2024-11-28 08:29:52.716134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.701 qpair failed and we were unable to recover it. 00:30:55.701 [2024-11-28 08:29:52.716525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.701 [2024-11-28 08:29:52.716561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.701 qpair failed and we were unable to recover it. 00:30:55.701 [2024-11-28 08:29:52.716894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.701 [2024-11-28 08:29:52.716921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.701 qpair failed and we were unable to recover it. 00:30:55.701 [2024-11-28 08:29:52.717290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.701 [2024-11-28 08:29:52.717321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.701 qpair failed and we were unable to recover it. 00:30:55.701 [2024-11-28 08:29:52.717702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.701 [2024-11-28 08:29:52.717731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.701 qpair failed and we were unable to recover it. 00:30:55.701 [2024-11-28 08:29:52.718093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.701 [2024-11-28 08:29:52.718121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.701 qpair failed and we were unable to recover it. 
00:30:55.701 [2024-11-28 08:29:52.718485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.701 [2024-11-28 08:29:52.718514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.701 qpair failed and we were unable to recover it. 00:30:55.701 [2024-11-28 08:29:52.718935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.701 [2024-11-28 08:29:52.718970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.701 qpair failed and we were unable to recover it. 00:30:55.701 [2024-11-28 08:29:52.719309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.701 [2024-11-28 08:29:52.719338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.701 qpair failed and we were unable to recover it. 00:30:55.701 [2024-11-28 08:29:52.719660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.701 [2024-11-28 08:29:52.719688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.701 qpair failed and we were unable to recover it. 00:30:55.701 [2024-11-28 08:29:52.720137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.701 [2024-11-28 08:29:52.720175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.701 qpair failed and we were unable to recover it. 00:30:55.701 [2024-11-28 08:29:52.720536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.701 [2024-11-28 08:29:52.720564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.701 qpair failed and we were unable to recover it. 00:30:55.701 [2024-11-28 08:29:52.720924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.701 [2024-11-28 08:29:52.720953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.701 qpair failed and we were unable to recover it. 00:30:55.701 [2024-11-28 08:29:52.721320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.701 [2024-11-28 08:29:52.721350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.701 qpair failed and we were unable to recover it. 00:30:55.701 [2024-11-28 08:29:52.721715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.701 [2024-11-28 08:29:52.721744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.701 qpair failed and we were unable to recover it. 00:30:55.701 [2024-11-28 08:29:52.722103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.701 [2024-11-28 08:29:52.722132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.701 qpair failed and we were unable to recover it. 
00:30:55.701 [2024-11-28 08:29:52.722506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.701 [2024-11-28 08:29:52.722535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.701 qpair failed and we were unable to recover it.
[editor's note: the three-message sequence above — posix_sock_create reporting connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x18cc0c0 at addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats back-to-back for the remainder of this block, identical except for the microsecond timestamps. The repetitions span wall-clock times 08:29:52.722 through 08:29:52.803 under console stamps 00:30:55.701 through 00:30:55.707; every connect() attempt fails the same way and every qpair is reported unrecoverable. The duplicate records are elided here.]
00:30:55.707 [2024-11-28 08:29:52.803588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.803616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.803996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.804025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.804398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.804427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.804804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.804832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.805188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.805219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.805464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.805495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.805857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.805888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.806245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.806274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.806522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.806549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.806897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.806925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 
00:30:55.707 [2024-11-28 08:29:52.807291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.807323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.807682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.807710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.808072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.808101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.808471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.808500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.808858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.808886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.809249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.809280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.809654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.809683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.810045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.810073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.810321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.810350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.810736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.810765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 
00:30:55.707 [2024-11-28 08:29:52.811130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.811168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.811531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.811560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.811919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.811948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.812169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.812200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.812571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.812600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.812961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.812996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.813343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.813374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.813619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.813650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.813997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.814025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.707 qpair failed and we were unable to recover it. 00:30:55.707 [2024-11-28 08:29:52.814390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.707 [2024-11-28 08:29:52.814421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 
00:30:55.708 [2024-11-28 08:29:52.814776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.814804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.815149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.815193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.815589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.815617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.815975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.816003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.816341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.816372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.816734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.816765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.817123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.817151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.817533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.817564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.817918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.817947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.818308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.818339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 
00:30:55.708 [2024-11-28 08:29:52.818707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.818736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.819106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.819135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.819555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.819584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.819943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.819971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.820246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.820279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.820556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.820584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.820963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.820991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.821365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.821395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.821755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.821784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.822028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.822059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 
00:30:55.708 [2024-11-28 08:29:52.822323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.822353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.822713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.822742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.822989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.823017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.823391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.823422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.823781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.823810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.824180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.824209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.824584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.824612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.824972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.825003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.825376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.825407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.825842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.825871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 
00:30:55.708 [2024-11-28 08:29:52.826226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.826257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.826625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.826654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.827014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.827043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.827413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.827443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.827687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.827716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.828063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.828091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.828457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.828492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.708 [2024-11-28 08:29:52.828870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.708 [2024-11-28 08:29:52.828900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.708 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.829344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.829374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.829732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.829760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 
00:30:55.709 [2024-11-28 08:29:52.830151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.830209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.830588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.830617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.830969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.831000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.831339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.831370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.831653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.831681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.832037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.832066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.832415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.832446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.832825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.832854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.833215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.833246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.833614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.833643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 
00:30:55.709 [2024-11-28 08:29:52.834003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.834033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.834385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.834416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.834783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.834811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.835189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.835219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.835594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.835623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.836018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.836048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.836350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.836383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.836629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.836659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.836906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.836937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.837325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.837355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 
00:30:55.709 [2024-11-28 08:29:52.837621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.837649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.838034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.838063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.838329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.838359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.838737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.838773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.839035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.839063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.839319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.839350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.839711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.839740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.840096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.840124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.840542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.840573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.840914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.840942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 
00:30:55.709 [2024-11-28 08:29:52.841278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.841309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.841661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.841691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.842049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.842077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.842427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.842458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.842842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.842872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.843238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.843269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.709 qpair failed and we were unable to recover it. 00:30:55.709 [2024-11-28 08:29:52.843642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.709 [2024-11-28 08:29:52.843672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.710 qpair failed and we were unable to recover it. 00:30:55.710 [2024-11-28 08:29:52.844038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.710 [2024-11-28 08:29:52.844070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.710 qpair failed and we were unable to recover it. 00:30:55.710 [2024-11-28 08:29:52.844357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.710 [2024-11-28 08:29:52.844386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.710 qpair failed and we were unable to recover it. 00:30:55.710 [2024-11-28 08:29:52.844738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.710 [2024-11-28 08:29:52.844767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.710 qpair failed and we were unable to recover it. 
00:30:55.710 [2024-11-28 08:29:52.845143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.710 [2024-11-28 08:29:52.845182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.710 qpair failed and we were unable to recover it. 00:30:55.710 [2024-11-28 08:29:52.845584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.710 [2024-11-28 08:29:52.845613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.710 qpair failed and we were unable to recover it. 00:30:55.710 [2024-11-28 08:29:52.845977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.710 [2024-11-28 08:29:52.846006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.710 qpair failed and we were unable to recover it. 00:30:55.710 [2024-11-28 08:29:52.846403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.710 [2024-11-28 08:29:52.846433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.710 qpair failed and we were unable to recover it. 00:30:55.710 [2024-11-28 08:29:52.846831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.710 [2024-11-28 08:29:52.846861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.710 qpair failed and we were unable to recover it. 00:30:55.710 [2024-11-28 08:29:52.847234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.710 [2024-11-28 08:29:52.847265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.710 qpair failed and we were unable to recover it. 00:30:55.710 [2024-11-28 08:29:52.847619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.710 [2024-11-28 08:29:52.847648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.710 qpair failed and we were unable to recover it. 00:30:55.710 [2024-11-28 08:29:52.847987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.710 [2024-11-28 08:29:52.848018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.710 qpair failed and we were unable to recover it. 00:30:55.710 [2024-11-28 08:29:52.848296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.710 [2024-11-28 08:29:52.848325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.710 qpair failed and we were unable to recover it. 00:30:55.710 [2024-11-28 08:29:52.848723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.710 [2024-11-28 08:29:52.848752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.710 qpair failed and we were unable to recover it. 
00:30:55.710 [2024-11-28 08:29:52.849119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.710 [2024-11-28 08:29:52.849157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.710 qpair failed and we were unable to recover it. 00:30:55.710 [2024-11-28 08:29:52.849567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.710 [2024-11-28 08:29:52.849608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.710 qpair failed and we were unable to recover it. 00:30:55.710 [2024-11-28 08:29:52.849966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.710 [2024-11-28 08:29:52.849995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.710 qpair failed and we were unable to recover it. 00:30:55.710 [2024-11-28 08:29:52.850351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.710 [2024-11-28 08:29:52.850382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.710 qpair failed and we were unable to recover it. 00:30:55.710 [2024-11-28 08:29:52.850636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.710 [2024-11-28 08:29:52.850664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.710 qpair failed and we were unable to recover it. 00:30:55.710 [2024-11-28 08:29:52.851120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.710 [2024-11-28 08:29:52.851149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.710 qpair failed and we were unable to recover it. 00:30:55.710 [2024-11-28 08:29:52.851512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.710 [2024-11-28 08:29:52.851542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.710 qpair failed and we were unable to recover it. 00:30:55.710 [2024-11-28 08:29:52.852003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.710 [2024-11-28 08:29:52.852035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.710 qpair failed and we were unable to recover it. 00:30:55.710 [2024-11-28 08:29:52.852301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.710 [2024-11-28 08:29:52.852332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.710 qpair failed and we were unable to recover it. 00:30:55.710 [2024-11-28 08:29:52.852580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.710 [2024-11-28 08:29:52.852612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.710 qpair failed and we were unable to recover it. 
00:30:55.710 [2024-11-28 08:29:52.853016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.710 [2024-11-28 08:29:52.853048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.710 qpair failed and we were unable to recover it.
00:30:55.710 [2024-11-28 08:29:52.853403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.710 [2024-11-28 08:29:52.853434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.710 qpair failed and we were unable to recover it.
00:30:55.710 [2024-11-28 08:29:52.853795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.710 [2024-11-28 08:29:52.853824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.710 qpair failed and we were unable to recover it.
00:30:55.710 [2024-11-28 08:29:52.854192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.710 [2024-11-28 08:29:52.854223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.710 qpair failed and we were unable to recover it.
00:30:55.710 [2024-11-28 08:29:52.854485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.710 [2024-11-28 08:29:52.854516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.710 qpair failed and we were unable to recover it.
00:30:55.710 [2024-11-28 08:29:52.854769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.710 [2024-11-28 08:29:52.854798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.710 qpair failed and we were unable to recover it.
00:30:55.710 [2024-11-28 08:29:52.855149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.710 [2024-11-28 08:29:52.855188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.710 qpair failed and we were unable to recover it.
00:30:55.710 [2024-11-28 08:29:52.855470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.710 [2024-11-28 08:29:52.855500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.710 qpair failed and we were unable to recover it.
00:30:55.710 [2024-11-28 08:29:52.855907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.710 [2024-11-28 08:29:52.855936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.710 qpair failed and we were unable to recover it.
00:30:55.710 [2024-11-28 08:29:52.856325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.710 [2024-11-28 08:29:52.856355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.710 qpair failed and we were unable to recover it.
00:30:55.710 [2024-11-28 08:29:52.856613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.710 [2024-11-28 08:29:52.856647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.710 qpair failed and we were unable to recover it.
00:30:55.710 [2024-11-28 08:29:52.856903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.710 [2024-11-28 08:29:52.856931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.710 qpair failed and we were unable to recover it.
00:30:55.710 [2024-11-28 08:29:52.857312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.710 [2024-11-28 08:29:52.857343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.710 qpair failed and we were unable to recover it.
00:30:55.710 [2024-11-28 08:29:52.857571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.710 [2024-11-28 08:29:52.857599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.710 qpair failed and we were unable to recover it.
00:30:55.710 [2024-11-28 08:29:52.857990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.710 [2024-11-28 08:29:52.858018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.710 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.858381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.858412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.858777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.858806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.859186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.859215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.859495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.859528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.859785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.859815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.860184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.860215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.860477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.860505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.860884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.860914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.861273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.861303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.861516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.861547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.861813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.861842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.862224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.862255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.862500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.862530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.862909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.862938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.863274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.863304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.863533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.863563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.863922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.863950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.864151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.864190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.864546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.864575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.864925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.864954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.865313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.865342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.865714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.865743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.866105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.866134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.866514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.866544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.866986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.867015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.867257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.867287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.867680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.867708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.868077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.868105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.868411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.868440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.868801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.868831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.869250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.869281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.869527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.869559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.869965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.869994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.870359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.711 [2024-11-28 08:29:52.870390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.711 qpair failed and we were unable to recover it.
00:30:55.711 [2024-11-28 08:29:52.870747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.870777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.871142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.871179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.871627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.871657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.872017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.872045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.872397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.872428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.872783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.872812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.873156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.873197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.873577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.873605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.873981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.874009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.874385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.874421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.874807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.874834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.875195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.875224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.875467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.875495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.875893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.875921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.876291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.876320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.876697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.876724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.876999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.877027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.877402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.877430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.877695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.877722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.878101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.878129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.878514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.878543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.878827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.878855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.879208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.879238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.879626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.879653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.880023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.880051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.880415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.880444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.880739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.880766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.881174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.881203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.881436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.881464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.881845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.881873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.882241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.882269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.882658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.882685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.883042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.883069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.883438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.883468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.883844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.883871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.884236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.884265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.884635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.884670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.884927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.712 [2024-11-28 08:29:52.884954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.712 qpair failed and we were unable to recover it.
00:30:55.712 [2024-11-28 08:29:52.885310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.885342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.885730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.885758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.886130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.886166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.886401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.886429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.886794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.886821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.887184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.887214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.887590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.887618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.887869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.887896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.888200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.888229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.888599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.888626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.888991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.889019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.889389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.889418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.889787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.889816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.890063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.890091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.890459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.890489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.890810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.890838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.891085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.891114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.891496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.891525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.891890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.891917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.892298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.892327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.892678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.892707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.892934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.892962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.893327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.893356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.893751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.893780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.894148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.894196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.894552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.894582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.894941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.894969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.895325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.895355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.895704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.895732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.896094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.896122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.896512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.896541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.896896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.896924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.897304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.897334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.897591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.897619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.897863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.897891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.898303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.898332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.898696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.898726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.898976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.899007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.713 qpair failed and we were unable to recover it.
00:30:55.713 [2024-11-28 08:29:52.899400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.713 [2024-11-28 08:29:52.899429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.899799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.899828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.900197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.900226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.900611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.900639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.900884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.900912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.901285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.901314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.901528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.901556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.901904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.901933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.902298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.902327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.902705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.902732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.903100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.903127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.903509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.903538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.903786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.903813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.904052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.904080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.904486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.904515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.904870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.904897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.905236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.905266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.905638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.905667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.906034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.906062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.906430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.906459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.906681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.906708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.906943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.906970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.907381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.907410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.907754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.907782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.908045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.908072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.908340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.908369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.908759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.908789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.909145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.909183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.909520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.909554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.909893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.909922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.910241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.910271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.910489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.910516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.910885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.910913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.911147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.911184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.911565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.911593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.911837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.911865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.912192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.912220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.912638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.912666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.714 [2024-11-28 08:29:52.913019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.714 [2024-11-28 08:29:52.913047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.714 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.913304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.913336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.913708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.913736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.914097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.914124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.914505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.914535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.914930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.914958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.915330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.915360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.915731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.915759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.916119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.916147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.916534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.916561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.916928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.916956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.917327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.917356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.917718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.917746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.918093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.918120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.918524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.918555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.918887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.918915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.919281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.919310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.919738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.919773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.920125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.920154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.920513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.920541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.920903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.920932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.921209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.921238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.921640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.921668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.922035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.922063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.922405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.922434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.922801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.922828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.923184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.923213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.923595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.923624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.923991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.924019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.924385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.924414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.924771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.715 [2024-11-28 08:29:52.924799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.715 qpair failed and we were unable to recover it.
00:30:55.715 [2024-11-28 08:29:52.925199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.984 [2024-11-28 08:29:53.140965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.984 qpair failed and we were unable to recover it.
00:30:55.984 [2024-11-28 08:29:53.141646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.984 [2024-11-28 08:29:53.141750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.984 qpair failed and we were unable to recover it.
00:30:55.984 [2024-11-28 08:29:53.142021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.984 [2024-11-28 08:29:53.142058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.984 qpair failed and we were unable to recover it.
00:30:55.984 [2024-11-28 08:29:53.142535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.984 [2024-11-28 08:29:53.142640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.984 qpair failed and we were unable to recover it.
00:30:55.984 [2024-11-28 08:29:53.143076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.984 [2024-11-28 08:29:53.143114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.984 qpair failed and we were unable to recover it.
00:30:55.984 [2024-11-28 08:29:53.143428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.984 [2024-11-28 08:29:53.143460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.984 qpair failed and we were unable to recover it.
00:30:55.984 [2024-11-28 08:29:53.143825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.984 [2024-11-28 08:29:53.143855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.984 qpair failed and we were unable to recover it.
00:30:55.984 [2024-11-28 08:29:53.144132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.984 [2024-11-28 08:29:53.144171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.984 qpair failed and we were unable to recover it.
00:30:55.984 [2024-11-28 08:29:53.144562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.984 [2024-11-28 08:29:53.144593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.984 qpair failed and we were unable to recover it.
00:30:55.984 [2024-11-28 08:29:53.144949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.984 [2024-11-28 08:29:53.144979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.984 qpair failed and we were unable to recover it.
00:30:55.984 [2024-11-28 08:29:53.145260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.984 [2024-11-28 08:29:53.145292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.984 qpair failed and we were unable to recover it.
00:30:55.984 [2024-11-28 08:29:53.145653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.984 [2024-11-28 08:29:53.145682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.984 qpair failed and we were unable to recover it.
00:30:55.984 [2024-11-28 08:29:53.146036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.984 [2024-11-28 08:29:53.146066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.984 qpair failed and we were unable to recover it.
00:30:55.984 [2024-11-28 08:29:53.146436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.984 [2024-11-28 08:29:53.146489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.984 qpair failed and we were unable to recover it.
00:30:55.984 [2024-11-28 08:29:53.146845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.985 [2024-11-28 08:29:53.146874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.985 qpair failed and we were unable to recover it.
00:30:55.985 [2024-11-28 08:29:53.147234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.985 [2024-11-28 08:29:53.147265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:55.985 qpair failed and we were unable to recover it.
00:30:55.985 [2024-11-28 08:29:53.147651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.147681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.148049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.148078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.148456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.148486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.148850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.148880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.149255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.149285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.149665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.149694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.150066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.150095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.150442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.150471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.150810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.150839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.151217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.151247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 
00:30:55.985 [2024-11-28 08:29:53.151501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.151536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.151933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.151963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.152346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.152379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.152769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.152798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.153180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.153211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.153553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.153584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.153949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.153978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.154327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.154359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.154727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.154757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.155136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.155177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 
00:30:55.985 [2024-11-28 08:29:53.155552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.155581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.155955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.155983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.156372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.156403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.156783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.156815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.157190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.157220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.157585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.157614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.157998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.158028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.158301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.158332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.158706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.158736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.159112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.159140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 
00:30:55.985 [2024-11-28 08:29:53.159570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.159599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.159849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.159878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.985 [2024-11-28 08:29:53.160258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.985 [2024-11-28 08:29:53.160288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.985 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.160668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.160697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.161067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.161096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.161483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.161513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.161879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.161908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.162270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.162301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.162676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.162705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.163142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.163180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 
00:30:55.986 [2024-11-28 08:29:53.163519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.163548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.163921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.163950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.164374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.164404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.164770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.164799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.165175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.165204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.165541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.165570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.165941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.165971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.166232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.166267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.166628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.166658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.167028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.167057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 
00:30:55.986 [2024-11-28 08:29:53.167414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.167444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.167811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.167841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.168204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.168237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.168642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.168671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.168883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.168915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.169277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.169307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.169583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.169612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.169978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.170006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.170426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.170456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.170821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.170850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 
00:30:55.986 [2024-11-28 08:29:53.171230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.171261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.171605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.171634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.171979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.172008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.172265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.172295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.172654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.172683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.173067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.173103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.173480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.173512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.173851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.173879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.174230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.174261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.174523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.174554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 
00:30:55.986 [2024-11-28 08:29:53.174915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.986 [2024-11-28 08:29:53.174943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.986 qpair failed and we were unable to recover it. 00:30:55.986 [2024-11-28 08:29:53.175318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.175350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.175696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.175725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.176063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.176091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.176454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.176484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.176846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.176875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.177134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.177171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.177552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.177581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.177953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.177982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.178372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.178404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 
00:30:55.987 [2024-11-28 08:29:53.178702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.178731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.179008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.179037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.179372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.179402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.179649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.179682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.179954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.179986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.180292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.180322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.180677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.180706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.181062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.181090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.181449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.181478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.181847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.181877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 
00:30:55.987 [2024-11-28 08:29:53.182243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.182273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.182497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.182525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.182930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.182966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.183341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.183370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.183719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.183748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.184109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.184138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.184510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.184541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.184899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.184928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.185291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.185321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.185663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.185692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 
00:30:55.987 [2024-11-28 08:29:53.186056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.186090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.186343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.186374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.186748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.186776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.187115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.187144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.187551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.187581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.188033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.188061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.188475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.188505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.188855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.188883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.189260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.987 [2024-11-28 08:29:53.189291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.987 qpair failed and we were unable to recover it. 00:30:55.987 [2024-11-28 08:29:53.189679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.189708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 
00:30:55.988 [2024-11-28 08:29:53.190072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.190101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.190465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.190495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.190861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.190890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.191203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.191235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.191601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.191629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.191994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.192022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.192408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.192439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.192686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.192718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.193045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.193076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.193407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.193437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 
00:30:55.988 [2024-11-28 08:29:53.193695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.193724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.194146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.194184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.194572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.194601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.195041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.195069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.195449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.195480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.195829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.195858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.196219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.196249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.196511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.196539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.196870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.196899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.197267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.197296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 
00:30:55.988 [2024-11-28 08:29:53.197658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.197686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.198024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.198053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.198441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.198471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.198824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.198854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.199213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.199243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.199643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.199672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.200049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.200078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.200442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.200471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.200813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.200842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 00:30:55.988 [2024-11-28 08:29:53.201209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.988 [2024-11-28 08:29:53.201238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:55.988 qpair failed and we were unable to recover it. 
00:30:56.272 [2024-11-28 08:29:53.275572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.272 [2024-11-28 08:29:53.275601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.272 qpair failed and we were unable to recover it. 00:30:56.272 [2024-11-28 08:29:53.275980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.272 [2024-11-28 08:29:53.276009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.272 qpair failed and we were unable to recover it. 00:30:56.272 [2024-11-28 08:29:53.276360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.272 [2024-11-28 08:29:53.276390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.272 qpair failed and we were unable to recover it. 00:30:56.272 [2024-11-28 08:29:53.276749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.272 [2024-11-28 08:29:53.276777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.272 qpair failed and we were unable to recover it. 00:30:56.272 [2024-11-28 08:29:53.277143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.272 [2024-11-28 08:29:53.277190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.272 qpair failed and we were unable to recover it. 00:30:56.272 [2024-11-28 08:29:53.277544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.272 [2024-11-28 08:29:53.277573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.272 qpair failed and we were unable to recover it. 00:30:56.272 [2024-11-28 08:29:53.277841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.272 [2024-11-28 08:29:53.277871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.272 qpair failed and we were unable to recover it. 00:30:56.272 [2024-11-28 08:29:53.278228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.272 [2024-11-28 08:29:53.278258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.272 qpair failed and we were unable to recover it. 00:30:56.272 [2024-11-28 08:29:53.278642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.272 [2024-11-28 08:29:53.278671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.272 qpair failed and we were unable to recover it. 00:30:56.272 [2024-11-28 08:29:53.279016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.272 [2024-11-28 08:29:53.279046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.272 qpair failed and we were unable to recover it. 
00:30:56.272 [2024-11-28 08:29:53.279417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.272 [2024-11-28 08:29:53.279447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.272 qpair failed and we were unable to recover it. 00:30:56.272 [2024-11-28 08:29:53.279811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.272 [2024-11-28 08:29:53.279840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.272 qpair failed and we were unable to recover it. 00:30:56.272 [2024-11-28 08:29:53.280188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.280218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.280591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.280620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.280976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.281004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.281374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.281404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.281761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.281789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.282156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.282193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.282550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.282579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.282941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.282970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 
00:30:56.273 [2024-11-28 08:29:53.283331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.283361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.283790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.283819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.284150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.284191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.284565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.284594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.284971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.285000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.285376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.285407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.285762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.285792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.286147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.286187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.286549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.286579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.286943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.286972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 
00:30:56.273 [2024-11-28 08:29:53.287334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.287363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.287738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.287767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.288131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.288166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.288589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.288618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.288860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.288891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.289271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.289303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.289650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.289681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.290046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.290075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.290420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.290450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.290804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.290832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 
00:30:56.273 [2024-11-28 08:29:53.291086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.273 [2024-11-28 08:29:53.291115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.273 qpair failed and we were unable to recover it. 00:30:56.273 [2024-11-28 08:29:53.291468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.291498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.291822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.291850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.292221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.292251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.292618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.292647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.292999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.293029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.293404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.293434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.293796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.293832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.294231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.294261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.294515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.294547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 
00:30:56.274 [2024-11-28 08:29:53.294935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.294964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.295318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.295349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.295699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.295727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.296069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.296098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.296462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.296492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.296861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.296890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.297230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.297259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.297678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.297706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.297963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.297991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.298341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.298371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 
00:30:56.274 [2024-11-28 08:29:53.298743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.298772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.299139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.299176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.299536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.299564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.299929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.299957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.300319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.300349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.300690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.300719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.301093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.301121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.301537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.301568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.301916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.301946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.302310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.302341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 
00:30:56.274 [2024-11-28 08:29:53.302680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.302711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.303072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.303100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.274 qpair failed and we were unable to recover it. 00:30:56.274 [2024-11-28 08:29:53.303472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.274 [2024-11-28 08:29:53.303502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.303736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.303765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.304132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.304182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.304539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.304569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.304922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.304951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.305326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.305359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.305720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.305749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.306181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.306211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 
00:30:56.275 [2024-11-28 08:29:53.306625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.306654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.307007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.307037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.307398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.307429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.307793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.307822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.308185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.308215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.308597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.308626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.308983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.309012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.309394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.309425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.309785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.309814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.310176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.310206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 
00:30:56.275 [2024-11-28 08:29:53.310564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.310594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.310956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.310985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.311425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.311457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.311869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.311897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.312226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.312257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.312628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.312657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.313020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.313048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.313486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.313515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.313872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.313901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.314269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.314298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 
00:30:56.275 [2024-11-28 08:29:53.314670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.314699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.315050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.315086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.315480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.315510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.315868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.275 [2024-11-28 08:29:53.315897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.275 qpair failed and we were unable to recover it. 00:30:56.275 [2024-11-28 08:29:53.316263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.316293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.316661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.316689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.317053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.317082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.317447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.317476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.317743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.317771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.318154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.318192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 
00:30:56.276 [2024-11-28 08:29:53.318550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.318581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.318936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.318964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.319327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.319358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.319714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.319742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.320108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.320138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.320519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.320550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.320897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.320927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.321298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.321327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.321700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.321729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.322085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.322115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 
00:30:56.276 [2024-11-28 08:29:53.322463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.322493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.322844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.322872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.323231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.323261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.323611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.323640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.324008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.324037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.324401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.324433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.324792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.324820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.325260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.325290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.325647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.325676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.326037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.326067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 
00:30:56.276 [2024-11-28 08:29:53.326283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.326316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.326696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.326735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.327093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.327122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.276 qpair failed and we were unable to recover it. 00:30:56.276 [2024-11-28 08:29:53.327301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.276 [2024-11-28 08:29:53.327332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.277 qpair failed and we were unable to recover it. 00:30:56.277 [2024-11-28 08:29:53.327696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.277 [2024-11-28 08:29:53.327726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.277 qpair failed and we were unable to recover it. 00:30:56.277 [2024-11-28 08:29:53.328087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.277 [2024-11-28 08:29:53.328116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.277 qpair failed and we were unable to recover it. 00:30:56.277 [2024-11-28 08:29:53.328498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.277 [2024-11-28 08:29:53.328529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.277 qpair failed and we were unable to recover it. 00:30:56.277 [2024-11-28 08:29:53.328904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.277 [2024-11-28 08:29:53.328934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.277 qpair failed and we were unable to recover it. 00:30:56.277 [2024-11-28 08:29:53.329291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.277 [2024-11-28 08:29:53.329322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.277 qpair failed and we were unable to recover it. 00:30:56.277 [2024-11-28 08:29:53.329691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.277 [2024-11-28 08:29:53.329720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.277 qpair failed and we were unable to recover it. 
00:30:56.277 [2024-11-28 08:29:53.329969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.277 [2024-11-28 08:29:53.329997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:56.277 qpair failed and we were unable to recover it.
[... ~200 further identical attempts elided, timestamps 08:29:53.330272 through 08:29:53.409005: each one is the same posix_sock_create() connect() failure with errno = 111, the same nvme_tcp_qpair_connect_sock() sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420, and the same "qpair failed and we were unable to recover it." ...]
00:30:56.284 [2024-11-28 08:29:53.409387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.284 [2024-11-28 08:29:53.409418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420
00:30:56.284 qpair failed and we were unable to recover it.
00:30:56.284 [2024-11-28 08:29:53.409780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.284 [2024-11-28 08:29:53.409809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.284 qpair failed and we were unable to recover it. 00:30:56.284 [2024-11-28 08:29:53.410185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.284 [2024-11-28 08:29:53.410215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.284 qpair failed and we were unable to recover it. 00:30:56.284 [2024-11-28 08:29:53.410573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.284 [2024-11-28 08:29:53.410601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.284 qpair failed and we were unable to recover it. 00:30:56.284 [2024-11-28 08:29:53.410969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.284 [2024-11-28 08:29:53.410997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.284 qpair failed and we were unable to recover it. 00:30:56.284 [2024-11-28 08:29:53.411384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.284 [2024-11-28 08:29:53.411415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.284 qpair failed and we were unable to recover it. 00:30:56.284 [2024-11-28 08:29:53.411773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.284 [2024-11-28 08:29:53.411802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.284 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.412170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.412200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.412575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.412607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.413013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.413042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.413383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.413415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 
00:30:56.285 [2024-11-28 08:29:53.413686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.413715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.414094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.414124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.414549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.414579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.414812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.414841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.415196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.415226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.415592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.415620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.416017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.416045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.416401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.416432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.416793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.416822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.417080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.417108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 
00:30:56.285 [2024-11-28 08:29:53.417354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.417385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.417758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.417788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.418029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.418061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.418347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.418379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.418626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.418655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.418982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.419011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.419384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.419415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.419662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.419690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.420055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.420084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.420451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.420481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 
00:30:56.285 [2024-11-28 08:29:53.420850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.420879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.421256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.421285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.421645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.421675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.422049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.422078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.422198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.422226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.422615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.285 [2024-11-28 08:29:53.422645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.285 qpair failed and we were unable to recover it. 00:30:56.285 [2024-11-28 08:29:53.423081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.423116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.423564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.423594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.423960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.423990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.424390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.424421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 
00:30:56.286 [2024-11-28 08:29:53.424680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.424710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.425079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.425108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.425549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.425579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.425833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.425862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.426217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.426247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.426491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.426519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.426879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.426908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.427276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.427306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.427694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.427723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.428064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.428094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 
00:30:56.286 [2024-11-28 08:29:53.428453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.428484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.428859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.428887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.429251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.429282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.429649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.429678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.429891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.429919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.430303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.430333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.430773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.430802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.431059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.431087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.431440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.431470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.431833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.431863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 
00:30:56.286 [2024-11-28 08:29:53.432228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.432259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.432524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.432553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.432911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.432940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.433312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.433348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.433733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.433763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.286 [2024-11-28 08:29:53.434215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.286 [2024-11-28 08:29:53.434245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.286 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.434580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.434610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.434839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.434867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.435251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.435281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.435522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.435551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 
00:30:56.287 [2024-11-28 08:29:53.436017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.436046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.436436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.436466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.436801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.436832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.437216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.437246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.437602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.437632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.438051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.438080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.438330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.438363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.438746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.438776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.439232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.439263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.439617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.439646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 
00:30:56.287 [2024-11-28 08:29:53.440011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.440039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.440191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.440223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.440619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.440648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.440989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.441019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.441394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.441424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.441806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.441836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.442195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.442225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.442641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.442670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.442896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.442925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.443205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.443235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 
00:30:56.287 [2024-11-28 08:29:53.443459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.443488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.443761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.443791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.444145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.444202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.444575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.444605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.444832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.444861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.445094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.445127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.287 qpair failed and we were unable to recover it. 00:30:56.287 [2024-11-28 08:29:53.445513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.287 [2024-11-28 08:29:53.445543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.288 qpair failed and we were unable to recover it. 00:30:56.288 [2024-11-28 08:29:53.445899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.288 [2024-11-28 08:29:53.445928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.288 qpair failed and we were unable to recover it. 00:30:56.288 [2024-11-28 08:29:53.446172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.288 [2024-11-28 08:29:53.446203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.288 qpair failed and we were unable to recover it. 00:30:56.288 [2024-11-28 08:29:53.446577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.288 [2024-11-28 08:29:53.446606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.288 qpair failed and we were unable to recover it. 
00:30:56.288 [2024-11-28 08:29:53.446820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.288 [2024-11-28 08:29:53.446848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.288 qpair failed and we were unable to recover it. 00:30:56.288 [2024-11-28 08:29:53.447222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.288 [2024-11-28 08:29:53.447252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.288 qpair failed and we were unable to recover it. 00:30:56.288 [2024-11-28 08:29:53.447613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.288 [2024-11-28 08:29:53.447642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.288 qpair failed and we were unable to recover it. 00:30:56.288 [2024-11-28 08:29:53.448069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.288 [2024-11-28 08:29:53.448098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.288 qpair failed and we were unable to recover it. 00:30:56.288 [2024-11-28 08:29:53.448498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.288 [2024-11-28 08:29:53.448529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.288 qpair failed and we were unable to recover it. 00:30:56.288 [2024-11-28 08:29:53.448792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.288 [2024-11-28 08:29:53.448820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.288 qpair failed and we were unable to recover it. 00:30:56.288 [2024-11-28 08:29:53.449040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.288 [2024-11-28 08:29:53.449069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.288 qpair failed and we were unable to recover it. 00:30:56.288 [2024-11-28 08:29:53.449450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.288 [2024-11-28 08:29:53.449480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.288 qpair failed and we were unable to recover it. 00:30:56.288 [2024-11-28 08:29:53.449901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.288 [2024-11-28 08:29:53.449931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.288 qpair failed and we were unable to recover it. 00:30:56.288 [2024-11-28 08:29:53.450305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.288 [2024-11-28 08:29:53.450336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.288 qpair failed and we were unable to recover it. 
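Every record above is the same two-stage failure: the TCP connect() made by posix_sock_create() returns errno 111, which on Linux is ECONNREFUSED (nothing is listening at the destination), so nvme_tcp_qpair_connect_sock() cannot bring the queue pair up and the driver declares it unrecoverable. A minimal standalone C sketch that reproduces the same errno, assuming a host where 10.0.0.2 is reachable but no NVMe/TCP target is listening on port 4420 (address and port mirror the log and are purely illustrative):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Same kind of socket the SPDK posix sock layer opens for NVMe/TCP. */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            return 1;
        }

        struct sockaddr_in sa = {0};
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                /* IANA-assigned NVMe/TCP port */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
            /* With no listener on the port this prints errno = 111
             * (Connection refused), the same value the log reports. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

Because the refusal comes back immediately from the peer's TCP stack, the retries in the log fail identically and quickly; errno 111 most likely points at the target side (listener not yet up, or already torn down) rather than at the initiator.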
00:30:56.288 [2024-11-28 08:29:53.450444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.288 [2024-11-28 08:29:53.450474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:56.288 qpair failed and we were unable to recover it.
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Write completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Write completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Write completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Write completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Write completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Write completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Read completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 Write completed with error (sct=0, sc=8) 00:30:56.288 starting I/O failed
00:30:56.288 [2024-11-28 08:29:53.451302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:56.288 [2024-11-28 08:29:53.451823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.288 [2024-11-28 08:29:53.451880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.288 qpair failed and we were unable to recover it.
00:30:56.288 [2024-11-28 08:29:53.452211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.288 [2024-11-28 08:29:53.452248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.288 qpair failed and we were unable to recover it.
00:30:56.288 [2024-11-28 08:29:53.452497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.288 [2024-11-28 08:29:53.452534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.288 qpair failed and we were unable to recover it.
00:30:56.288 [2024-11-28 08:29:53.452896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.289 [2024-11-28 08:29:53.452926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.289 qpair failed and we were unable to recover it.
00:30:56.289 [2024-11-28 08:29:53.453449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.289 [2024-11-28 08:29:53.453553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.289 qpair failed and we were unable to recover it.
00:30:56.289 [2024-11-28 08:29:53.453930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.289 [2024-11-28 08:29:53.453968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.289 qpair failed and we were unable to recover it.
00:30:56.289 [2024-11-28 08:29:53.454416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.289 [2024-11-28 08:29:53.454520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.289 qpair failed and we were unable to recover it.
00:30:56.289 [2024-11-28 08:29:53.454979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.289 [2024-11-28 08:29:53.455017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.289 qpair failed and we were unable to recover it.
00:30:56.289 [2024-11-28 08:29:53.455420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.289 [2024-11-28 08:29:53.455453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.289 qpair failed and we were unable to recover it.
00:30:56.289 [2024-11-28 08:29:53.455709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.289 [2024-11-28 08:29:53.455738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.289 qpair failed and we were unable to recover it.
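The burst of "completed with error (sct=0, sc=8) ... starting I/O failed" entries is the driver flushing the 32 commands still outstanding on the dead queue pair: status code type 0 is the generic command set, and status code 0x8 is Command Aborted due to SQ Deletion (SPDK_NVME_SC_ABORTED_SQ_DELETION in spdk/nvme_spec.h), the status commands get when their submission queue is destroyed. The -6 on the spdk_nvme_qpair_process_completions() line is -ENXIO, matching the "No such device or address" text. A hedged sketch of how an I/O completion callback sees these fields; spdk_nvme_cpl_is_error() and the status layout are public SPDK API, while the callback itself and the label passed through ctx are illustrative:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Illustrative callback of type spdk_nvme_cmd_cb; it would be passed to a
     * submission call such as spdk_nvme_ns_cmd_read(), with ctx set to a
     * label like "Read" or "Write". */
    static void
    io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            /* For the aborts above: sct == SPDK_NVME_SCT_GENERIC (0) and
             * sc == SPDK_NVME_SC_ABORTED_SQ_DELETION (0x8). */
            fprintf(stderr, "%s completed with error (sct=%d, sc=%d)\n",
                    (const char *)ctx, cpl->status.sct, cpl->status.sc);
            return;
        }
        /* Success path would release buffers, count the I/O, and so on. */
    }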
00:30:56.289 [2024-11-28 08:29:53.456101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.289 [2024-11-28 08:29:53.456134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.289 qpair failed and we were unable to recover it.
00:30:56.289 [... the same three-line sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats roughly 200 more times between 08:29:53.456 and 08:29:53.537; identical entries collapsed here ...]
00:30:56.296 [2024-11-28 08:29:53.537072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.296 [2024-11-28 08:29:53.537110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.296 qpair failed and we were unable to recover it.
00:30:56.296 [2024-11-28 08:29:53.537468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.296 [2024-11-28 08:29:53.537498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.296 qpair failed and we were unable to recover it. 00:30:56.296 [2024-11-28 08:29:53.537857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.296 [2024-11-28 08:29:53.537885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.296 qpair failed and we were unable to recover it. 00:30:56.296 [2024-11-28 08:29:53.538249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.296 [2024-11-28 08:29:53.538280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.296 qpair failed and we were unable to recover it. 00:30:56.296 [2024-11-28 08:29:53.538610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.296 [2024-11-28 08:29:53.538639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.296 qpair failed and we were unable to recover it. 00:30:56.296 [2024-11-28 08:29:53.539000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.296 [2024-11-28 08:29:53.539029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.296 qpair failed and we were unable to recover it. 00:30:56.297 [2024-11-28 08:29:53.539397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.297 [2024-11-28 08:29:53.539469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.297 qpair failed and we were unable to recover it. 00:30:56.297 [2024-11-28 08:29:53.539827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.297 [2024-11-28 08:29:53.539863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.297 qpair failed and we were unable to recover it. 00:30:56.297 [2024-11-28 08:29:53.540203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.297 [2024-11-28 08:29:53.540232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.297 qpair failed and we were unable to recover it. 00:30:56.297 [2024-11-28 08:29:53.540596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.297 [2024-11-28 08:29:53.540624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.297 qpair failed and we were unable to recover it. 00:30:56.297 [2024-11-28 08:29:53.540993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.297 [2024-11-28 08:29:53.541022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.297 qpair failed and we were unable to recover it. 
00:30:56.297 [2024-11-28 08:29:53.541446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.297 [2024-11-28 08:29:53.541476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.297 qpair failed and we were unable to recover it. 00:30:56.297 [2024-11-28 08:29:53.541837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.297 [2024-11-28 08:29:53.541867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.297 qpair failed and we were unable to recover it. 00:30:56.297 [2024-11-28 08:29:53.542230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.297 [2024-11-28 08:29:53.542261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.297 qpair failed and we were unable to recover it. 00:30:56.297 [2024-11-28 08:29:53.542652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.297 [2024-11-28 08:29:53.542684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.576 qpair failed and we were unable to recover it. 00:30:56.576 [2024-11-28 08:29:53.543062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.576 [2024-11-28 08:29:53.543096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.576 qpair failed and we were unable to recover it. 00:30:56.576 [2024-11-28 08:29:53.543335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.576 [2024-11-28 08:29:53.543367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.576 qpair failed and we were unable to recover it. 00:30:56.576 [2024-11-28 08:29:53.543746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.576 [2024-11-28 08:29:53.543776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.576 qpair failed and we were unable to recover it. 00:30:56.576 [2024-11-28 08:29:53.544138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.576 [2024-11-28 08:29:53.544195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.576 qpair failed and we were unable to recover it. 00:30:56.576 [2024-11-28 08:29:53.544522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.576 [2024-11-28 08:29:53.544553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.576 qpair failed and we were unable to recover it. 00:30:56.576 [2024-11-28 08:29:53.544916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.576 [2024-11-28 08:29:53.544945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.576 qpair failed and we were unable to recover it. 
00:30:56.577 [2024-11-28 08:29:53.545305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.545336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.545689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.545718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.546085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.546114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.546447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.546478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.546817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.546846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.547220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.547252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.547565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.547593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.547947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.547976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.548343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.548374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.548731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.548760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 
00:30:56.577 [2024-11-28 08:29:53.549122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.549152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.549538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.549569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.549923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.549953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.550312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.550343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.550691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.550721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.551085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.551114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.551478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.551509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.551877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.551906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.552274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.552304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.552669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.552699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 
00:30:56.577 [2024-11-28 08:29:53.553056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.553085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.553439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.553469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.553836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.553866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.554097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.554126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.554535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.554567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.554930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.554960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.555328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.555358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.555728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.555758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.556138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.556180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.556549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.556580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 
00:30:56.577 [2024-11-28 08:29:53.556943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.556972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.557332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.557363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.557722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.557750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.558079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.558108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.558389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.558419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.577 [2024-11-28 08:29:53.558825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.577 [2024-11-28 08:29:53.558854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.577 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.559191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.559223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.559585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.559614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.559976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.560007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.560358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.560389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 
00:30:56.578 [2024-11-28 08:29:53.560757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.560786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.561148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.561188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.561493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.561522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.561883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.561912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.562263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.562295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.562660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.562689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.563066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.563095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.563444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.563476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.563855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.563884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.564239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.564268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 
00:30:56.578 [2024-11-28 08:29:53.564630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.564659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.565028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.565057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.565404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.565435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.565682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.565717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.565971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.566000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.566351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.566381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.566747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.566777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.566955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.566988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.567383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.567413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.567780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.567809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 
00:30:56.578 [2024-11-28 08:29:53.568189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.568225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.568584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.568614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.569046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.569075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.569407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.569437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.569796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.569824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.570196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.570226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.570606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.570635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.570999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.571028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.571271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.571301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.571620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.571649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 
00:30:56.578 [2024-11-28 08:29:53.572014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.572043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.572397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.572427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.572766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.578 [2024-11-28 08:29:53.572794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.578 qpair failed and we were unable to recover it. 00:30:56.578 [2024-11-28 08:29:53.573224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.573254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.573601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.573631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.573993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.574021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.574386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.574416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.574769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.574797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.575173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.575203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.575560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.575592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 
00:30:56.579 [2024-11-28 08:29:53.575939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.575970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.576324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.576355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.576692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.576724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.577083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.577113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.577495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.577526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.577895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.577923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.578293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.578323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.578703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.578737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.579075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.579111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.579499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.579530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 
00:30:56.579 [2024-11-28 08:29:53.579893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.579923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.580265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.580295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.580635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.580663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.581011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.581053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.581428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.581459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.581822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.581850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.582216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.582247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.582611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.582639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.583014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.583042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.583389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.583419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 
00:30:56.579 [2024-11-28 08:29:53.583789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.583827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.584192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.584222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.584580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.584609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.584970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.584999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.585343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.585375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.585740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.585768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.586138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.586192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.586462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.586493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.586842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.586870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 00:30:56.579 [2024-11-28 08:29:53.587234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.579 [2024-11-28 08:29:53.587265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.579 qpair failed and we were unable to recover it. 
00:30:56.579 [2024-11-28 08:29:53.587629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.579 [2024-11-28 08:29:53.587666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.580 qpair failed and we were unable to recover it.
00:30:56.580 [... the identical three-message sequence — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats for every subsequent reconnect attempt, timestamps 2024-11-28 08:29:53.588029 through 08:29:53.668756, wall clock 00:30:56.580 through 00:30:56.585 ...]
00:30:56.585 [2024-11-28 08:29:53.668986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.585 [2024-11-28 08:29:53.669016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.585 qpair failed and we were unable to recover it. 00:30:56.585 [2024-11-28 08:29:53.669400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.585 [2024-11-28 08:29:53.669430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.585 qpair failed and we were unable to recover it. 00:30:56.585 [2024-11-28 08:29:53.669893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.585 [2024-11-28 08:29:53.669921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.585 qpair failed and we were unable to recover it. 00:30:56.585 [2024-11-28 08:29:53.670275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.585 [2024-11-28 08:29:53.670306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.585 qpair failed and we were unable to recover it. 00:30:56.585 [2024-11-28 08:29:53.670574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.585 [2024-11-28 08:29:53.670602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.585 qpair failed and we were unable to recover it. 00:30:56.585 [2024-11-28 08:29:53.670956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.585 [2024-11-28 08:29:53.670986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.585 qpair failed and we were unable to recover it. 00:30:56.585 [2024-11-28 08:29:53.671355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.585 [2024-11-28 08:29:53.671386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.585 qpair failed and we were unable to recover it. 00:30:56.585 [2024-11-28 08:29:53.671733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.585 [2024-11-28 08:29:53.671764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.585 qpair failed and we were unable to recover it. 00:30:56.585 [2024-11-28 08:29:53.672137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.585 [2024-11-28 08:29:53.672172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.585 qpair failed and we were unable to recover it. 00:30:56.585 [2024-11-28 08:29:53.672388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.585 [2024-11-28 08:29:53.672418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.585 qpair failed and we were unable to recover it. 
00:30:56.586 [2024-11-28 08:29:53.672676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.672706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.673078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.673107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.673475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.673505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.673889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.673925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.674280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.674310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.674662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.674692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.674943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.674975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.675224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.675256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.675592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.675622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.676034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.676062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 
00:30:56.586 [2024-11-28 08:29:53.676405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.676434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.676821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.676851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.677181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.677212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.677571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.677601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.677974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.678004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.678368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.678399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.678764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.678795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.679132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.679171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.679522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.679551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.679919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.679948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 
00:30:56.586 [2024-11-28 08:29:53.680200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.680231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.680566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.680595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.680977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.681008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.681330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.681361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.681698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.681728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.682108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.682136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.682589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.682618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.682991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.683019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.683388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.683417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.683799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.683829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 
00:30:56.586 [2024-11-28 08:29:53.684153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.684195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.684601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.684631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.586 qpair failed and we were unable to recover it. 00:30:56.586 [2024-11-28 08:29:53.685041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.586 [2024-11-28 08:29:53.685070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.685421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.685452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.685816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.685845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.686219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.686248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.686598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.686627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.686981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.687009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.687365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.687396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.687768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.687798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 
00:30:56.587 [2024-11-28 08:29:53.688198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.688231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.688596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.688625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.688894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.688923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.689278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.689317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.689686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.689714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.690085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.690115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.690502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.690533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.690913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.690942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.691183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.691213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.691633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.691662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 
00:30:56.587 [2024-11-28 08:29:53.692030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.692058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.692337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.692366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.692746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.692776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.693090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.693119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.693369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.693402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.693740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.693770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.694142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.694191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.694572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.694602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.694974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.695004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.695439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.695471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 
00:30:56.587 [2024-11-28 08:29:53.695889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.695918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.696334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.696364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.696715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.696745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.696980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.697008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.697394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.697424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.697682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.697714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.698103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.698133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.698390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.698420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.698642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.698671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 00:30:56.587 [2024-11-28 08:29:53.699103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.587 [2024-11-28 08:29:53.699132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.587 qpair failed and we were unable to recover it. 
00:30:56.587 [2024-11-28 08:29:53.699530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.699561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.699920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.699958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.700331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.700361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.700717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.700746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.701117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.701145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.701530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.701558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.701927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.701957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.702327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.702357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.702704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.702732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.703105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.703135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 
00:30:56.588 [2024-11-28 08:29:53.703559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.703588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.703938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.703967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.704198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.704227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.704514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.704549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.704919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.704948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.705369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.705398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.705761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.705791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.706171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.706202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.706461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.706492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.706834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.706864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 
00:30:56.588 [2024-11-28 08:29:53.707207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.707239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.707629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.707657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.708026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.708055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.708391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.708423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.708801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.708830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.709201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.709230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.709592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.709621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.709992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.710021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.710395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.710427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.710803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.710833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 
00:30:56.588 [2024-11-28 08:29:53.711182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.711213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.711591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.711620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.711990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.712018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.712396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.712434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.712753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.712781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.713147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.713186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.713457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.713485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.588 [2024-11-28 08:29:53.713833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.588 [2024-11-28 08:29:53.713862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.588 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.714227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.714257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.714629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.714657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 
00:30:56.589 [2024-11-28 08:29:53.715011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.715044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.715397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.715427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.715786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.715815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.716180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.716209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.716578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.716608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.716960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.716990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.717384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.717420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.717781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.717810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.718180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.718210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.718574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.718602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 
00:30:56.589 [2024-11-28 08:29:53.718949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.718978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.719335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.719366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.719613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.719642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.720006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.720041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.720400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.720431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.720795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.720823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.721189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.721219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.721583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.721612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.721975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.722005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.722368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.722398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 
00:30:56.589 [2024-11-28 08:29:53.722754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.722783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.723145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.723184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.723549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.723578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.723889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.723919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.724303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.724333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.724697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.724727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.725086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.725114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.725452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.725483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.725916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.725945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.726194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.726226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 
00:30:56.589 [2024-11-28 08:29:53.726488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.726517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.726880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.726908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.727290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.727320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.727688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.727717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.727961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.727989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.589 qpair failed and we were unable to recover it. 00:30:56.589 [2024-11-28 08:29:53.728340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.589 [2024-11-28 08:29:53.728370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.728727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.728757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.729122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.729150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.729514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.729544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.729846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.729877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 
00:30:56.590 [2024-11-28 08:29:53.730183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.730214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.730653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.730682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.730925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.730957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.731325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.731355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.731770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.731799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.732152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.732192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.732548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.732577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.732940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.732969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.733373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.733404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.733732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.733761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 
00:30:56.590 [2024-11-28 08:29:53.734195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.734225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.734596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.734625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.734987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.735017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.735379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.735421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.735781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.735810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.736179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.736208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.736537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.736566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.736927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.736954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.737311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.737347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.737731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.737761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 
00:30:56.590 [2024-11-28 08:29:53.738125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.738154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.738514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.738543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.738909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.738936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.739308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.739338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.739599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.739628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.740055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.740084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.740421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.740451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.740823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.740851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.590 [2024-11-28 08:29:53.741213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.590 [2024-11-28 08:29:53.741243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.590 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.741621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.741650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 
00:30:56.591 [2024-11-28 08:29:53.742016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.742046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.742391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.742421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.742670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.742698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.743078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.743107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.743512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.743542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.743865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.743895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.744117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.744149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.744551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.744583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.744941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.744970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.745333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.745363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 
00:30:56.591 [2024-11-28 08:29:53.745733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.745763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.746122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.746150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.746491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.746520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.746887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.746916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.747174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.747207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.747554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.747583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.747954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.747982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.748343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.748372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.748761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.748790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.749147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.749185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 
00:30:56.591 [2024-11-28 08:29:53.749519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.749549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.749904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.749933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.750296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.750326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.750695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.750729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.751092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.751121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.751510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.751541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.751914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.751943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.752300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.752331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.752719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.752747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.753096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.753126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 
00:30:56.591 [2024-11-28 08:29:53.753489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.753519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.753880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.753909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.754323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.754353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.754691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.754721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.755095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.755123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.755484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.755514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.591 qpair failed and we were unable to recover it. 00:30:56.591 [2024-11-28 08:29:53.755876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.591 [2024-11-28 08:29:53.755904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.756274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.756304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.756608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.756638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.757008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.757037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 
00:30:56.592 [2024-11-28 08:29:53.757387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.757417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.757775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.757803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.758089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.758118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.758482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.758513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.758859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.758888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.759259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.759289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.759669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.759698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.760062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.760091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.760500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.760530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.760891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.760919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 
00:30:56.592 [2024-11-28 08:29:53.761294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.761330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.761698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.761726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.762079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.762109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.762491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.762520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.762879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.762908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.763271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.763301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.763670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.763700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.764054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.764083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.764467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.764497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.764836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.764866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 
00:30:56.592 [2024-11-28 08:29:53.765107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.765138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.765539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.765569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.765924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.765952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.766315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.766344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.766521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.766554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.766885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.766914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.767278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.767308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.767677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.767706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.768075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.768103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.768357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.768386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 
00:30:56.592 [2024-11-28 08:29:53.768683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.768711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.769073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.769102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.769458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.769488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.769849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.592 [2024-11-28 08:29:53.769878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.592 qpair failed and we were unable to recover it. 00:30:56.592 [2024-11-28 08:29:53.770119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.770150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.770407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.770436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.770881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.770910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.771270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.771302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.771669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.771698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.772046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.772075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 
00:30:56.593 [2024-11-28 08:29:53.772419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.772448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.772812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.772840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.773202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.773231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.773591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.773619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.773985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.774014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.774355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.774384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.774764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.774792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.775153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.775206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.775614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.775643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.776024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.776053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 
00:30:56.593 [2024-11-28 08:29:53.776418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.776455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.776854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.776884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.777246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.777275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.777655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.777683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.778049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.778079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.778412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.778442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.778799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.778829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.779190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.779219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.779578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.779606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.779969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.779997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 
00:30:56.593 [2024-11-28 08:29:53.780380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.780411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.780742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.780771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.781135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.781171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.781528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.781555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.781907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.781936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.782201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.782230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.782583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.782610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.782979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.783008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.783423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.783453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.783812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.783841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 
00:30:56.593 [2024-11-28 08:29:53.784207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.784237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.784595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.593 [2024-11-28 08:29:53.784623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.593 qpair failed and we were unable to recover it. 00:30:56.593 [2024-11-28 08:29:53.784979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.594 [2024-11-28 08:29:53.785009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.594 qpair failed and we were unable to recover it. 00:30:56.594 [2024-11-28 08:29:53.785271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.594 [2024-11-28 08:29:53.785301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.594 qpair failed and we were unable to recover it. 00:30:56.594 [2024-11-28 08:29:53.785653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.594 [2024-11-28 08:29:53.785688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.594 qpair failed and we were unable to recover it. 00:30:56.594 [2024-11-28 08:29:53.786045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.594 [2024-11-28 08:29:53.786073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.594 qpair failed and we were unable to recover it. 00:30:56.594 [2024-11-28 08:29:53.786411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.594 [2024-11-28 08:29:53.786441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.594 qpair failed and we were unable to recover it. 00:30:56.594 [2024-11-28 08:29:53.786806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.594 [2024-11-28 08:29:53.786835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.594 qpair failed and we were unable to recover it. 00:30:56.594 [2024-11-28 08:29:53.787090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.594 [2024-11-28 08:29:53.787123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.594 qpair failed and we were unable to recover it. 00:30:56.594 [2024-11-28 08:29:53.787500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.594 [2024-11-28 08:29:53.787530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.594 qpair failed and we were unable to recover it. 
00:30:56.594 [2024-11-28 08:29:53.787886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.594 [2024-11-28 08:29:53.787915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.594 qpair failed and we were unable to recover it. 00:30:56.594 [2024-11-28 08:29:53.788279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.594 [2024-11-28 08:29:53.788308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.594 qpair failed and we were unable to recover it. 00:30:56.594 [2024-11-28 08:29:53.788677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.594 [2024-11-28 08:29:53.788705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.594 qpair failed and we were unable to recover it. 00:30:56.594 [2024-11-28 08:29:53.789067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.594 [2024-11-28 08:29:53.789096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.594 qpair failed and we were unable to recover it. 00:30:56.594 [2024-11-28 08:29:53.789451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.594 [2024-11-28 08:29:53.789480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.594 qpair failed and we were unable to recover it. 00:30:56.594 [2024-11-28 08:29:53.789857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.594 [2024-11-28 08:29:53.789885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.594 qpair failed and we were unable to recover it. 00:30:56.594 [2024-11-28 08:29:53.790233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.594 [2024-11-28 08:29:53.790263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.594 qpair failed and we were unable to recover it. 00:30:56.594 [2024-11-28 08:29:53.790634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.594 [2024-11-28 08:29:53.790663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.594 qpair failed and we were unable to recover it. 00:30:56.594 [2024-11-28 08:29:53.791027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.594 [2024-11-28 08:29:53.791055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.594 qpair failed and we were unable to recover it. 00:30:56.594 [2024-11-28 08:29:53.791408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.594 [2024-11-28 08:29:53.791438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.594 qpair failed and we were unable to recover it. 
00:30:56.594 [2024-11-28 08:29:53.791779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.594 [2024-11-28 08:29:53.791815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.594 qpair failed and we were unable to recover it.
00:30:56.594 [2024-11-28 08:29:53.792170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.594 [2024-11-28 08:29:53.792200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.594 qpair failed and we were unable to recover it.
00:30:56.594 [2024-11-28 08:29:53.792573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.594 [2024-11-28 08:29:53.792603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.594 qpair failed and we were unable to recover it.
00:30:56.594 [2024-11-28 08:29:53.792965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.594 [2024-11-28 08:29:53.792993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.594 qpair failed and we were unable to recover it.
00:30:56.594 [2024-11-28 08:29:53.793264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.594 [2024-11-28 08:29:53.793294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.594 qpair failed and we were unable to recover it.
00:30:56.594 [2024-11-28 08:29:53.793701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.594 [2024-11-28 08:29:53.793729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.594 qpair failed and we were unable to recover it.
00:30:56.594 [2024-11-28 08:29:53.794087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.594 [2024-11-28 08:29:53.794116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.594 qpair failed and we were unable to recover it.
00:30:56.594 [2024-11-28 08:29:53.794487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.594 [2024-11-28 08:29:53.794519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.594 qpair failed and we were unable to recover it.
00:30:56.594 [2024-11-28 08:29:53.794877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.594 [2024-11-28 08:29:53.794905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.594 qpair failed and we were unable to recover it.
00:30:56.594 [2024-11-28 08:29:53.795349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.594 [2024-11-28 08:29:53.795378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.594 qpair failed and we were unable to recover it.
00:30:56.594 [2024-11-28 08:29:53.795735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.594 [2024-11-28 08:29:53.795764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.594 qpair failed and we were unable to recover it.
00:30:56.594 [2024-11-28 08:29:53.796122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.594 [2024-11-28 08:29:53.796152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.594 qpair failed and we were unable to recover it.
00:30:56.594 [2024-11-28 08:29:53.796524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.594 [2024-11-28 08:29:53.796552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.594 qpair failed and we were unable to recover it.
00:30:56.594 [2024-11-28 08:29:53.796923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.594 [2024-11-28 08:29:53.796950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.594 qpair failed and we were unable to recover it.
00:30:56.594 [2024-11-28 08:29:53.797321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.594 [2024-11-28 08:29:53.797352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.594 qpair failed and we were unable to recover it.
00:30:56.594 [2024-11-28 08:29:53.797722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.594 [2024-11-28 08:29:53.797751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.594 qpair failed and we were unable to recover it.
00:30:56.594 [2024-11-28 08:29:53.798117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.594 [2024-11-28 08:29:53.798147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.594 qpair failed and we were unable to recover it.
00:30:56.594 [2024-11-28 08:29:53.798497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.594 [2024-11-28 08:29:53.798527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.594 qpair failed and we were unable to recover it.
00:30:56.594 [2024-11-28 08:29:53.798886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.594 [2024-11-28 08:29:53.798915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.594 qpair failed and we were unable to recover it.
00:30:56.594 [2024-11-28 08:29:53.799277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.594 [2024-11-28 08:29:53.799307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.594 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.799670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.799698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.800062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.800091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.800479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.800509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.800868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.800897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.801284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.801314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.801659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.801689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.802059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.802087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.802466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.802496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.802867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.802897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.803255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.803285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.803536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.803563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.803921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.803949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.804321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.804350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.804712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.804741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.805119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.805148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.805512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.805542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.805911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.805941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.806318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.806348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.806693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.806721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.807085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.807114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.807473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.807509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.807863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.807891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.808257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.808286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.808670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.808698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.809062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.809090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.809446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.809476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.809837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.809866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.810212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.810243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.810630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.810659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.811027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.811056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.811423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.811454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.811818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.811849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.595 qpair failed and we were unable to recover it.
00:30:56.595 [2024-11-28 08:29:53.812279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.595 [2024-11-28 08:29:53.812308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.812655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.812685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.813049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.813077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.813396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.813426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.813796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.813824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.814189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.814220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.814592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.814622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.814950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.814978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.815320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.815351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.815720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.815748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.816118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.816147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.816489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.816520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.816886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.816914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.817176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.817205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.817571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.817601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.817956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.817985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.818343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.818374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.818715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.818746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.818990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.819023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.819428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.819458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.819816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.819853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.820207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.820236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.820596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.820625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.820998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.821027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.821384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.821414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.821779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.821808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.822173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.822203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.822545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.822574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.822945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.822981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.823376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.823407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.823761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.823789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.824157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.824194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.824538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.824567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.824927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.824956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.825333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.825363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.825731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.825758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.826138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.826186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.826535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.596 [2024-11-28 08:29:53.826564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.596 qpair failed and we were unable to recover it.
00:30:56.596 [2024-11-28 08:29:53.826926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.826954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.827324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.827356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.827605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.827636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.827990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.828019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.828387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.828417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.828780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.828808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.829178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.829208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.829577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.829605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.829969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.829998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.830311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.830340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.830711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.830740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.831101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.831129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.831510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.831541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.831902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.831931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.832297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.832327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.832695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.832732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.832981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.833013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.833387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.833418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.833788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.833817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.834194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.834224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.834593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.834623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.834991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.835019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.835384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.835414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.835780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.835808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.836153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.836193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.836531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.836562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.836817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.836846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.837203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.837232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.837641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.837670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.838044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.838073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.838476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.838511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.838877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.838906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.839281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.839311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.839677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.839705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.840069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.840097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.840461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.840490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.840894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.840924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.841291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.597 [2024-11-28 08:29:53.841321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.597 qpair failed and we were unable to recover it.
00:30:56.597 [2024-11-28 08:29:53.841694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.598 [2024-11-28 08:29:53.841722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.598 qpair failed and we were unable to recover it.
00:30:56.598 [2024-11-28 08:29:53.842096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.598 [2024-11-28 08:29:53.842125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.598 qpair failed and we were unable to recover it.
00:30:56.598 [2024-11-28 08:29:53.842484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.598 [2024-11-28 08:29:53.842513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.598 qpair failed and we were unable to recover it.
00:30:56.598 [2024-11-28 08:29:53.842870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.598 [2024-11-28 08:29:53.842898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.598 qpair failed and we were unable to recover it.
00:30:56.598 [2024-11-28 08:29:53.843258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.598 [2024-11-28 08:29:53.843288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.598 qpair failed and we were unable to recover it.
00:30:56.598 [2024-11-28 08:29:53.843670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.598 [2024-11-28 08:29:53.843699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.598 qpair failed and we were unable to recover it.
00:30:56.598 [2024-11-28 08:29:53.844045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.598 [2024-11-28 08:29:53.844076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.598 qpair failed and we were unable to recover it.
00:30:56.598 [2024-11-28 08:29:53.844418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.598 [2024-11-28 08:29:53.844447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.598 qpair failed and we were unable to recover it.
00:30:56.598 [2024-11-28 08:29:53.844807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.598 [2024-11-28 08:29:53.844835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.598 qpair failed and we were unable to recover it.
00:30:56.598 [2024-11-28 08:29:53.845201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.598 [2024-11-28 08:29:53.845231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.598 qpair failed and we were unable to recover it.
00:30:56.598 [2024-11-28 08:29:53.845583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.598 [2024-11-28 08:29:53.845611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.598 qpair failed and we were unable to recover it.
00:30:56.598 [2024-11-28 08:29:53.845977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.598 [2024-11-28 08:29:53.846006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.598 qpair failed and we were unable to recover it.
00:30:56.598 [2024-11-28 08:29:53.846391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.598 [2024-11-28 08:29:53.846421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.598 qpair failed and we were unable to recover it.
00:30:56.598 [2024-11-28 08:29:53.846775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.598 [2024-11-28 08:29:53.846803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.598 qpair failed and we were unable to recover it.
00:30:56.877 [2024-11-28 08:29:53.847070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.877 [2024-11-28 08:29:53.847100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.877 qpair failed and we were unable to recover it.
00:30:56.877 [2024-11-28 08:29:53.847485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.877 [2024-11-28 08:29:53.847515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.877 qpair failed and we were unable to recover it.
00:30:56.877 [2024-11-28 08:29:53.847875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.877 [2024-11-28 08:29:53.847904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.877 qpair failed and we were unable to recover it.
00:30:56.877 [2024-11-28 08:29:53.848269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.877 [2024-11-28 08:29:53.848299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.877 qpair failed and we were unable to recover it.
00:30:56.877 [2024-11-28 08:29:53.848668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.877 [2024-11-28 08:29:53.848697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.877 qpair failed and we were unable to recover it.
00:30:56.877 [2024-11-28 08:29:53.848996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.877 [2024-11-28 08:29:53.849024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.877 qpair failed and we were unable to recover it.
00:30:56.877 [2024-11-28 08:29:53.849387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.877 [2024-11-28 08:29:53.849417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.877 qpair failed and we were unable to recover it.
00:30:56.877 [2024-11-28 08:29:53.849801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.877 [2024-11-28 08:29:53.849832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.877 qpair failed and we were unable to recover it.
00:30:56.877 [2024-11-28 08:29:53.850183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.877 [2024-11-28 08:29:53.850213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.878 qpair failed and we were unable to recover it.
00:30:56.878 [2024-11-28 08:29:53.850578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.878 [2024-11-28 08:29:53.850607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.878 qpair failed and we were unable to recover it.
00:30:56.878 [2024-11-28 08:29:53.850969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.878 [2024-11-28 08:29:53.850996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.878 qpair failed and we were unable to recover it.
00:30:56.878 [2024-11-28 08:29:53.851350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.878 [2024-11-28 08:29:53.851379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.878 qpair failed and we were unable to recover it.
00:30:56.878 [2024-11-28 08:29:53.851744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.878 [2024-11-28 08:29:53.851772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.878 qpair failed and we were unable to recover it.
00:30:56.878 [2024-11-28 08:29:53.852142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.878 [2024-11-28 08:29:53.852200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.878 qpair failed and we were unable to recover it.
00:30:56.878 [2024-11-28 08:29:53.852575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.878 [2024-11-28 08:29:53.852603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.878 qpair failed and we were unable to recover it.
00:30:56.878 [2024-11-28 08:29:53.852950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.878 [2024-11-28 08:29:53.852979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.878 qpair failed and we were unable to recover it.
00:30:56.878 [2024-11-28 08:29:53.853347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.878 [2024-11-28 08:29:53.853377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.878 qpair failed and we were unable to recover it.
00:30:56.878 [2024-11-28 08:29:53.853736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.878 [2024-11-28 08:29:53.853765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.878 qpair failed and we were unable to recover it.
00:30:56.878 [2024-11-28 08:29:53.854130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.878 [2024-11-28 08:29:53.854173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.878 qpair failed and we were unable to recover it.
00:30:56.878 [2024-11-28 08:29:53.854508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.878 [2024-11-28 08:29:53.854537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.878 qpair failed and we were unable to recover it.
00:30:56.878 [2024-11-28 08:29:53.854875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.878 [2024-11-28 08:29:53.854904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.878 qpair failed and we were unable to recover it.
00:30:56.878 [2024-11-28 08:29:53.855350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.878 [2024-11-28 08:29:53.855380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.878 qpair failed and we were unable to recover it.
00:30:56.878 [2024-11-28 08:29:53.855735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.878 [2024-11-28 08:29:53.855763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.878 qpair failed and we were unable to recover it.
00:30:56.878 [2024-11-28 08:29:53.856138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.878 [2024-11-28 08:29:53.856185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.878 qpair failed and we were unable to recover it.
00:30:56.878 [2024-11-28 08:29:53.856579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.878 [2024-11-28 08:29:53.856607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.878 qpair failed and we were unable to recover it.
00:30:56.878 [2024-11-28 08:29:53.856860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.878 [2024-11-28 08:29:53.856889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.878 qpair failed and we were unable to recover it.
00:30:56.878 [2024-11-28 08:29:53.857264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.878 [2024-11-28 08:29:53.857294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.878 qpair failed and we were unable to recover it.
00:30:56.878 [2024-11-28 08:29:53.857656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.878 [2024-11-28 08:29:53.857684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.878 qpair failed and we were unable to recover it.
00:30:56.878 [2024-11-28 08:29:53.858045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.878 [2024-11-28 08:29:53.858073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.878 qpair failed and we were unable to recover it. 00:30:56.878 [2024-11-28 08:29:53.858490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.878 [2024-11-28 08:29:53.858520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.878 qpair failed and we were unable to recover it. 00:30:56.878 [2024-11-28 08:29:53.858849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.878 [2024-11-28 08:29:53.858878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.878 qpair failed and we were unable to recover it. 00:30:56.878 [2024-11-28 08:29:53.859241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.878 [2024-11-28 08:29:53.859270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.878 qpair failed and we were unable to recover it. 00:30:56.878 [2024-11-28 08:29:53.859629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.878 [2024-11-28 08:29:53.859659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.878 qpair failed and we were unable to recover it. 00:30:56.878 [2024-11-28 08:29:53.859897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.878 [2024-11-28 08:29:53.859929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.878 qpair failed and we were unable to recover it. 00:30:56.878 [2024-11-28 08:29:53.860262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.878 [2024-11-28 08:29:53.860293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.878 qpair failed and we were unable to recover it. 00:30:56.878 [2024-11-28 08:29:53.860656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.878 [2024-11-28 08:29:53.860684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.878 qpair failed and we were unable to recover it. 00:30:56.878 [2024-11-28 08:29:53.861069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.878 [2024-11-28 08:29:53.861099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.878 qpair failed and we were unable to recover it. 00:30:56.878 [2024-11-28 08:29:53.861464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.878 [2024-11-28 08:29:53.861493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.878 qpair failed and we were unable to recover it. 
00:30:56.878 [2024-11-28 08:29:53.861851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.878 [2024-11-28 08:29:53.861879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.878 qpair failed and we were unable to recover it. 00:30:56.878 [2024-11-28 08:29:53.862256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.878 [2024-11-28 08:29:53.862285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.878 qpair failed and we were unable to recover it. 00:30:56.878 [2024-11-28 08:29:53.862644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.878 [2024-11-28 08:29:53.862673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.878 qpair failed and we were unable to recover it. 00:30:56.878 [2024-11-28 08:29:53.863039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.878 [2024-11-28 08:29:53.863068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.878 qpair failed and we were unable to recover it. 00:30:56.878 [2024-11-28 08:29:53.863352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.878 [2024-11-28 08:29:53.863381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.878 qpair failed and we were unable to recover it. 00:30:56.878 [2024-11-28 08:29:53.863759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.878 [2024-11-28 08:29:53.863788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.878 qpair failed and we were unable to recover it. 00:30:56.878 [2024-11-28 08:29:53.864181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.878 [2024-11-28 08:29:53.864212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.878 qpair failed and we were unable to recover it. 00:30:56.878 [2024-11-28 08:29:53.864675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.864705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.865065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.865103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.865482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.865511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 
00:30:56.879 [2024-11-28 08:29:53.865874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.865904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.866276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.866306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.866554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.866586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.866954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.866983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.867253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.867282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.867554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.867582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.867947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.867977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.868317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.868346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.868711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.868740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.869094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.869124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 
00:30:56.879 [2024-11-28 08:29:53.869523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.869559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.869907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.869935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.870303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.870334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.870700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.870729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.871089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.871117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.871461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.871490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.871857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.871885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.872239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.872271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.872607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.872637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.873005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.873034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 
00:30:56.879 [2024-11-28 08:29:53.873401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.873431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.873809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.873837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.874202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.874232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.874672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.874700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.875058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.875088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.875435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.875465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.875835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.875863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.876231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.876260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.876617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.876646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.877010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.877039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 
00:30:56.879 [2024-11-28 08:29:53.877411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.877442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.877786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.877814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.878177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.878207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.878573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.878603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.878975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.879 [2024-11-28 08:29:53.879004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.879 qpair failed and we were unable to recover it. 00:30:56.879 [2024-11-28 08:29:53.879427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.879457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.879696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.879728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.880121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.880150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.880533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.880562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.880908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.880937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 
00:30:56.880 [2024-11-28 08:29:53.881318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.881348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.881723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.881752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.882110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.882137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.882357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.882387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.882732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.882762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.883101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.883129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.883506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.883536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.883894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.883924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.884291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.884321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.884682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.884711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 
00:30:56.880 [2024-11-28 08:29:53.885072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.885109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.885507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.885537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.885915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.885944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.886310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.886340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.886708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.886736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.886984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.887013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.887391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.887421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.887688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.887716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.887922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.887951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.888190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.888219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 
00:30:56.880 [2024-11-28 08:29:53.888613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.888643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.888947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.888977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.889233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.889281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.889551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.889579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.889945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.889974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.890322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.890354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.890715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.890744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.891078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.891106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.891417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.891448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.891813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.891842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 
00:30:56.880 [2024-11-28 08:29:53.892185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.892217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.892598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.892626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.880 [2024-11-28 08:29:53.892991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.880 [2024-11-28 08:29:53.893022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.880 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.893347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.893376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.893752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.893781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.894144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.894182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.894518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.894547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.894953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.894982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.895312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.895344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.895598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.895627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 
00:30:56.881 [2024-11-28 08:29:53.895996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.896024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.896269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.896298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.896572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.896601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.896960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.896991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.897358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.897389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.897739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.897768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.898024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.898057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.898404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.898435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.898800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.898829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.899198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.899228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 
00:30:56.881 [2024-11-28 08:29:53.899494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.899528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.899936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.899967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.900338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.900368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.900739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.900767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.901105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.901132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.901491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.901520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.901889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.901917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.902290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.902320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.902570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.902603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.902948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.902978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 
00:30:56.881 [2024-11-28 08:29:53.903358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.903387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.903752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.903781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.904151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.904192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.904580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.904610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.904951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.904982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.905338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.905369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.905736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.905764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.906134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.906176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.906577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.906606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 00:30:56.881 [2024-11-28 08:29:53.906996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.907024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.881 qpair failed and we were unable to recover it. 
00:30:56.881 [2024-11-28 08:29:53.907391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.881 [2024-11-28 08:29:53.907421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.882 qpair failed and we were unable to recover it. 00:30:56.882 [2024-11-28 08:29:53.907774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.882 [2024-11-28 08:29:53.907802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.882 qpair failed and we were unable to recover it. 00:30:56.882 [2024-11-28 08:29:53.908177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.882 [2024-11-28 08:29:53.908207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.882 qpair failed and we were unable to recover it. 00:30:56.882 [2024-11-28 08:29:53.908572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.882 [2024-11-28 08:29:53.908601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.882 qpair failed and we were unable to recover it. 00:30:56.882 [2024-11-28 08:29:53.908983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.882 [2024-11-28 08:29:53.909012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.882 qpair failed and we were unable to recover it. 00:30:56.882 [2024-11-28 08:29:53.909389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.882 [2024-11-28 08:29:53.909419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.882 qpair failed and we were unable to recover it. 00:30:56.882 [2024-11-28 08:29:53.909780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.882 [2024-11-28 08:29:53.909809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.882 qpair failed and we were unable to recover it. 00:30:56.882 [2024-11-28 08:29:53.910195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.882 [2024-11-28 08:29:53.910227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.882 qpair failed and we were unable to recover it. 00:30:56.882 [2024-11-28 08:29:53.910599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.882 [2024-11-28 08:29:53.910629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.882 qpair failed and we were unable to recover it. 00:30:56.882 [2024-11-28 08:29:53.910992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.882 [2024-11-28 08:29:53.911022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.882 qpair failed and we were unable to recover it. 
00:30:56.882 [2024-11-28 08:29:53.911386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.882 [2024-11-28 08:29:53.911417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.882 qpair failed and we were unable to recover it. 00:30:56.882 [2024-11-28 08:29:53.911762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.882 [2024-11-28 08:29:53.911790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.882 qpair failed and we were unable to recover it. 00:30:56.882 [2024-11-28 08:29:53.912041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.882 [2024-11-28 08:29:53.912069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.882 qpair failed and we were unable to recover it. 00:30:56.882 [2024-11-28 08:29:53.912454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.882 [2024-11-28 08:29:53.912484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.882 qpair failed and we were unable to recover it. 00:30:56.882 [2024-11-28 08:29:53.912848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.882 [2024-11-28 08:29:53.912878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.882 qpair failed and we were unable to recover it. 00:30:56.882 [2024-11-28 08:29:53.913148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.882 [2024-11-28 08:29:53.913187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.882 qpair failed and we were unable to recover it. 00:30:56.882 [2024-11-28 08:29:53.913578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.882 [2024-11-28 08:29:53.913608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.882 qpair failed and we were unable to recover it. 00:30:56.882 [2024-11-28 08:29:53.913994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.882 [2024-11-28 08:29:53.914023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.882 qpair failed and we were unable to recover it. 00:30:56.882 [2024-11-28 08:29:53.914390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.882 [2024-11-28 08:29:53.914420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.882 qpair failed and we were unable to recover it. 00:30:56.882 [2024-11-28 08:29:53.914552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.882 [2024-11-28 08:29:53.914583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.882 qpair failed and we were unable to recover it. 
00:30:56.882 [2024-11-28 08:29:53.914982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.882 [2024-11-28 08:29:53.915018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.882 qpair failed and we were unable to recover it.
00:30:56.888 [... the same triplet -- posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." -- repeats for every reconnect attempt from 08:29:53.915271 through 08:29:53.992566 ...]
00:30:56.888 [2024-11-28 08:29:53.992880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.888 [2024-11-28 08:29:53.992910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.888 qpair failed and we were unable to recover it. 00:30:56.888 [2024-11-28 08:29:53.993302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.888 [2024-11-28 08:29:53.993331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.888 qpair failed and we were unable to recover it. 00:30:56.888 [2024-11-28 08:29:53.993709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.888 [2024-11-28 08:29:53.993737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.888 qpair failed and we were unable to recover it. 00:30:56.888 [2024-11-28 08:29:53.994102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.888 [2024-11-28 08:29:53.994132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.888 qpair failed and we were unable to recover it. 00:30:56.888 [2024-11-28 08:29:53.994511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.888 [2024-11-28 08:29:53.994541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.888 qpair failed and we were unable to recover it. 00:30:56.888 [2024-11-28 08:29:53.994889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.888 [2024-11-28 08:29:53.994918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.888 qpair failed and we were unable to recover it. 00:30:56.888 [2024-11-28 08:29:53.995278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.888 [2024-11-28 08:29:53.995307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.888 qpair failed and we were unable to recover it. 00:30:56.888 [2024-11-28 08:29:53.995658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.888 [2024-11-28 08:29:53.995686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.888 qpair failed and we were unable to recover it. 00:30:56.888 [2024-11-28 08:29:53.996045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.888 [2024-11-28 08:29:53.996075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.888 qpair failed and we were unable to recover it. 00:30:56.888 [2024-11-28 08:29:53.996418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.888 [2024-11-28 08:29:53.996448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.888 qpair failed and we were unable to recover it. 
00:30:56.888 [2024-11-28 08:29:53.996815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.888 [2024-11-28 08:29:53.996844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.888 qpair failed and we were unable to recover it. 00:30:56.888 [2024-11-28 08:29:53.997194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.888 [2024-11-28 08:29:53.997224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.888 qpair failed and we were unable to recover it. 00:30:56.888 [2024-11-28 08:29:53.997580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.888 [2024-11-28 08:29:53.997609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.888 qpair failed and we were unable to recover it. 00:30:56.888 [2024-11-28 08:29:53.997978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.888 [2024-11-28 08:29:53.998007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.888 qpair failed and we were unable to recover it. 00:30:56.888 [2024-11-28 08:29:53.998271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.888 [2024-11-28 08:29:53.998300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.888 qpair failed and we were unable to recover it. 00:30:56.888 [2024-11-28 08:29:53.998670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.888 [2024-11-28 08:29:53.998699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.888 qpair failed and we were unable to recover it. 00:30:56.888 [2024-11-28 08:29:53.999062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.888 [2024-11-28 08:29:53.999091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.888 qpair failed and we were unable to recover it. 00:30:56.888 [2024-11-28 08:29:53.999345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.888 [2024-11-28 08:29:53.999385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.888 qpair failed and we were unable to recover it. 00:30:56.888 [2024-11-28 08:29:53.999766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.888 [2024-11-28 08:29:53.999798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.888 qpair failed and we were unable to recover it. 00:30:56.888 [2024-11-28 08:29:54.000156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.888 [2024-11-28 08:29:54.000195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.888 qpair failed and we were unable to recover it. 
00:30:56.888 [2024-11-28 08:29:54.000554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.888 [2024-11-28 08:29:54.000593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.888 qpair failed and we were unable to recover it. 00:30:56.888 [2024-11-28 08:29:54.000929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.888 [2024-11-28 08:29:54.000958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.888 qpair failed and we were unable to recover it. 00:30:56.888 [2024-11-28 08:29:54.001318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.888 [2024-11-28 08:29:54.001350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.888 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.001584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.001615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.001982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.002012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.002392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.002422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.002761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.002789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.003157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.003195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.003537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.003567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.003926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.003955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 
00:30:56.889 [2024-11-28 08:29:54.004306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.004337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.004700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.004732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.005101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.005130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.005486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.005516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.005895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.005930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.006296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.006327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.006667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.006696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.007142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.007180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.007534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.007564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.007946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.007975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 
00:30:56.889 [2024-11-28 08:29:54.008320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.008352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.008640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.008668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.009011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.009039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.009401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.009432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.009795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.009825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.010200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.010231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.010489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.010521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.010890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.010921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.011251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.011282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.011641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.011670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 
00:30:56.889 [2024-11-28 08:29:54.012009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.012038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.012401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.012432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.012782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.012810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.013180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.013211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.013556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.013586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.013950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.013979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.014344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.014375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.014753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.014789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.015157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.015195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 00:30:56.889 [2024-11-28 08:29:54.015540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.015570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.889 qpair failed and we were unable to recover it. 
00:30:56.889 [2024-11-28 08:29:54.015915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.889 [2024-11-28 08:29:54.015945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.016307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.016337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.016702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.016730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.017092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.017121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.017499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.017528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.017890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.017918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.018295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.018325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.018758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.018788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.019030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.019060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.019439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.019468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 
00:30:56.890 [2024-11-28 08:29:54.019826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.019855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.020217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.020248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.020615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.020642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.021010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.021038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.021384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.021414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.021655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.021687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.022049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.022079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.022414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.022443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.022807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.022835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.023209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.023239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 
00:30:56.890 [2024-11-28 08:29:54.023617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.023646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.024016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.024044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.024389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.024419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.024789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.024818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.025186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.025216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.025556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.025594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.025960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.025989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.026344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.026373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.026737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.026765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.027130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.027167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 
00:30:56.890 [2024-11-28 08:29:54.027521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.027549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.027809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.027838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.028189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.028220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.028585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.028613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.028974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.029002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.029349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.029378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.029745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.029773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.030184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.030220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.890 qpair failed and we were unable to recover it. 00:30:56.890 [2024-11-28 08:29:54.030592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.890 [2024-11-28 08:29:54.030620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.030976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.031005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 
00:30:56.891 [2024-11-28 08:29:54.031258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.031288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.031659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.031688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.032121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.032150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.032511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.032540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.032898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.032926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.033280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.033310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.033686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.033715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.034081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.034110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.034549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.034579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.034947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.034975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 
00:30:56.891 [2024-11-28 08:29:54.035334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.035365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.035720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.035748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.035979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.036007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.036370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.036401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.036771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.036800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.037149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.037194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.037522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.037551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.037930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.037958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.038176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.038209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.038629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.038659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 
00:30:56.891 [2024-11-28 08:29:54.039040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.039069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.039407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.039438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.039808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.039836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.040192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.040222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.040580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.040609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.040973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.041002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.041343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.041372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.041779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.041807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.042058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.042087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 00:30:56.891 [2024-11-28 08:29:54.042478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.891 [2024-11-28 08:29:54.042507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.891 qpair failed and we were unable to recover it. 
00:30:56.891 [2024-11-28 08:29:54.042870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.891 [2024-11-28 08:29:54.042900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.892 qpair failed and we were unable to recover it.
[... this three-line sequence (connect() failed, errno = 111 -> sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats with fresh timestamps for every reconnect attempt, interleaved with the shell trace below ...]
00:30:56.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2161126 Killed "${NVMF_APP[@]}" "$@"
00:30:56.893 08:29:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:56.893 08:29:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:56.893 08:29:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:56.893 08:29:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:56.893 08:29:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:56.894 08:29:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2162037
00:30:56.894 08:29:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2162037
00:30:56.894 08:29:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:56.894 08:29:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2162037 ']'
00:30:56.894 08:29:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:56.894 08:29:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:56.894 08:29:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:56.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:56.894 08:29:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:56.894 08:29:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the connect() failed (errno = 111) / sock connection error of tqpair=0x7f68c4000b90 / qpair failed and we were unable to recover it. sequence keeps repeating against 10.0.0.2:4420 through the end of this excerpt ...]
00:30:56.897 [2024-11-28 08:29:54.120387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.897 [2024-11-28 08:29:54.120419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.897 qpair failed and we were unable to recover it. 00:30:56.897 [2024-11-28 08:29:54.120795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.897 [2024-11-28 08:29:54.120824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.897 qpair failed and we were unable to recover it. 00:30:56.897 [2024-11-28 08:29:54.121198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.897 [2024-11-28 08:29:54.121227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.897 qpair failed and we were unable to recover it. 00:30:56.897 [2024-11-28 08:29:54.121600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.897 [2024-11-28 08:29:54.121633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.897 qpair failed and we were unable to recover it. 00:30:56.897 [2024-11-28 08:29:54.122009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.897 [2024-11-28 08:29:54.122038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.897 qpair failed and we were unable to recover it. 00:30:56.897 [2024-11-28 08:29:54.122400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.897 [2024-11-28 08:29:54.122430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.897 qpair failed and we were unable to recover it. 00:30:56.897 [2024-11-28 08:29:54.122742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.897 [2024-11-28 08:29:54.122773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.897 qpair failed and we were unable to recover it. 00:30:56.897 [2024-11-28 08:29:54.123138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.897 [2024-11-28 08:29:54.123179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.897 qpair failed and we were unable to recover it. 00:30:56.897 [2024-11-28 08:29:54.123532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.897 [2024-11-28 08:29:54.123563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.897 qpair failed and we were unable to recover it. 00:30:56.897 [2024-11-28 08:29:54.123933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.897 [2024-11-28 08:29:54.123962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.897 qpair failed and we were unable to recover it. 
00:30:56.897 [2024-11-28 08:29:54.124321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.897 [2024-11-28 08:29:54.124350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.897 qpair failed and we were unable to recover it. 00:30:56.897 [2024-11-28 08:29:54.124728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.897 [2024-11-28 08:29:54.124756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.897 qpair failed and we were unable to recover it. 00:30:56.897 [2024-11-28 08:29:54.125133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.897 [2024-11-28 08:29:54.125186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.897 qpair failed and we were unable to recover it. 00:30:56.897 [2024-11-28 08:29:54.125600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.897 [2024-11-28 08:29:54.125636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.897 qpair failed and we were unable to recover it. 00:30:56.897 [2024-11-28 08:29:54.125981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.897 [2024-11-28 08:29:54.126012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.897 qpair failed and we were unable to recover it. 00:30:56.897 [2024-11-28 08:29:54.126399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.897 [2024-11-28 08:29:54.126430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.126815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.126843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.127219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.127248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.127630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.127658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.128036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.128064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 
00:30:56.898 [2024-11-28 08:29:54.128423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.128454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.128825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.128854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.129230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.129260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.129633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.129666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.130035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.130064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.130426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.130456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.130828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.130865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.131239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.131268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.131652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.131681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.132043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.132071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 
00:30:56.898 [2024-11-28 08:29:54.132455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.132484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.132848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.132878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.133244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.133275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.133625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.133655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.134029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.134058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.134414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.134444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.134702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.134731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.135105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.135135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.135517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.135546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.135917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.135947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 
00:30:56.898 [2024-11-28 08:29:54.136332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.898 [2024-11-28 08:29:54.136364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:56.898 qpair failed and we were unable to recover it.
00:30:56.898 [2024-11-28 08:29:54.136631] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization...
00:30:56.898 [2024-11-28 08:29:54.136709] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[the connect() failed / qpair failed sequence resumes at 08:29:54.136752 and repeats through 08:29:54.139386]
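An aside on the EAL parameters in the banner above (not part of the test output): "-c 0xF0" is DPDK's hexadecimal core-mask argument, and 0xF0 is binary 11110000, so the nvmf application is pinned to CPU cores 4 through 7. A minimal C sketch that decodes such a mask; only the mask value is taken from the log, everything else is illustrative:

    #include <stdio.h>

    int main(void)
    {
        unsigned long mask = 0xF0; /* value copied from the "-c 0xF0" EAL parameter */

        /* Walk every bit of the mask; each set bit selects one CPU core. */
        for (unsigned core = 0; core < 8 * sizeof mask; core++)
            if (mask & (1UL << core))
                printf("core %u\n", core); /* for 0xF0 this prints cores 4, 5, 6, 7 */
        return 0;
    }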
00:30:56.898 [2024-11-28 08:29:54.139647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.139681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.140047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.140079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.140525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.140559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.898 qpair failed and we were unable to recover it. 00:30:56.898 [2024-11-28 08:29:54.140784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.898 [2024-11-28 08:29:54.140823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.899 qpair failed and we were unable to recover it. 00:30:56.899 [2024-11-28 08:29:54.141194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.899 [2024-11-28 08:29:54.141227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.899 qpair failed and we were unable to recover it. 00:30:56.899 [2024-11-28 08:29:54.141625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.899 [2024-11-28 08:29:54.141656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.899 qpair failed and we were unable to recover it. 00:30:56.899 [2024-11-28 08:29:54.142034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.899 [2024-11-28 08:29:54.142064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.899 qpair failed and we were unable to recover it. 00:30:56.899 [2024-11-28 08:29:54.142458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.899 [2024-11-28 08:29:54.142488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.899 qpair failed and we were unable to recover it. 00:30:56.899 [2024-11-28 08:29:54.142746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.899 [2024-11-28 08:29:54.142779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.899 qpair failed and we were unable to recover it. 00:30:56.899 [2024-11-28 08:29:54.143142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.899 [2024-11-28 08:29:54.143183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.899 qpair failed and we were unable to recover it. 
00:30:56.899 [2024-11-28 08:29:54.143531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.899 [2024-11-28 08:29:54.143561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.899 qpair failed and we were unable to recover it. 00:30:56.899 [2024-11-28 08:29:54.143944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.899 [2024-11-28 08:29:54.143973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.899 qpair failed and we were unable to recover it. 00:30:56.899 [2024-11-28 08:29:54.144348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.899 [2024-11-28 08:29:54.144381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.899 qpair failed and we were unable to recover it. 00:30:56.899 [2024-11-28 08:29:54.144763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.899 [2024-11-28 08:29:54.144793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.899 qpair failed and we were unable to recover it. 00:30:56.899 [2024-11-28 08:29:54.145169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.899 [2024-11-28 08:29:54.145200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.899 qpair failed and we were unable to recover it. 00:30:56.899 [2024-11-28 08:29:54.145474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.899 [2024-11-28 08:29:54.145505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.899 qpair failed and we were unable to recover it. 00:30:56.899 [2024-11-28 08:29:54.145801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.899 [2024-11-28 08:29:54.145831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.899 qpair failed and we were unable to recover it. 00:30:56.899 [2024-11-28 08:29:54.146203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.899 [2024-11-28 08:29:54.146234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.899 qpair failed and we were unable to recover it. 00:30:56.899 [2024-11-28 08:29:54.146507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.899 [2024-11-28 08:29:54.146537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.899 qpair failed and we were unable to recover it. 00:30:56.899 [2024-11-28 08:29:54.146899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.899 [2024-11-28 08:29:54.146930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.899 qpair failed and we were unable to recover it. 
00:30:56.899 [2024-11-28 08:29:54.147283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.899 [2024-11-28 08:29:54.147315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:56.899 qpair failed and we were unable to recover it. 00:30:57.174 [2024-11-28 08:29:54.147692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.174 [2024-11-28 08:29:54.147727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.174 qpair failed and we were unable to recover it. 00:30:57.174 [2024-11-28 08:29:54.148095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.174 [2024-11-28 08:29:54.148125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.174 qpair failed and we were unable to recover it. 00:30:57.174 [2024-11-28 08:29:54.148482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.174 [2024-11-28 08:29:54.148514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.174 qpair failed and we were unable to recover it. 00:30:57.174 [2024-11-28 08:29:54.148808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.174 [2024-11-28 08:29:54.148839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.174 qpair failed and we were unable to recover it. 00:30:57.174 [2024-11-28 08:29:54.149199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.174 [2024-11-28 08:29:54.149232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.174 qpair failed and we were unable to recover it. 00:30:57.174 [2024-11-28 08:29:54.149664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.174 [2024-11-28 08:29:54.149694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.174 qpair failed and we were unable to recover it. 00:30:57.174 [2024-11-28 08:29:54.150050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.174 [2024-11-28 08:29:54.150081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.174 qpair failed and we were unable to recover it. 00:30:57.174 [2024-11-28 08:29:54.150457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.174 [2024-11-28 08:29:54.150490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.174 qpair failed and we were unable to recover it. 00:30:57.174 [2024-11-28 08:29:54.150869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.174 [2024-11-28 08:29:54.150900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.174 qpair failed and we were unable to recover it. 
00:30:57.174 [2024-11-28 08:29:54.151072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.174 [2024-11-28 08:29:54.151108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.174 qpair failed and we were unable to recover it. 00:30:57.174 [2024-11-28 08:29:54.151512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.174 [2024-11-28 08:29:54.151548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.174 qpair failed and we were unable to recover it. 00:30:57.174 [2024-11-28 08:29:54.151917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.174 [2024-11-28 08:29:54.151948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.174 qpair failed and we were unable to recover it. 00:30:57.174 [2024-11-28 08:29:54.152302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.174 [2024-11-28 08:29:54.152335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.174 qpair failed and we were unable to recover it. 00:30:57.174 [2024-11-28 08:29:54.152516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.174 [2024-11-28 08:29:54.152549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.174 qpair failed and we were unable to recover it. 00:30:57.174 [2024-11-28 08:29:54.152847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.174 [2024-11-28 08:29:54.152879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.174 qpair failed and we were unable to recover it. 00:30:57.174 [2024-11-28 08:29:54.153132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.174 [2024-11-28 08:29:54.153187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.174 qpair failed and we were unable to recover it. 00:30:57.174 [2024-11-28 08:29:54.153564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.174 [2024-11-28 08:29:54.153594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.174 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.153988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.154018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.154360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.154390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 
00:30:57.175 [2024-11-28 08:29:54.154759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.154788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.155062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.155091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.155325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.155355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.155715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.155752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.156018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.156048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.156359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.156388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.156761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.156792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.157224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.157256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.157625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.157655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.158035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.158064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 
00:30:57.175 [2024-11-28 08:29:54.158441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.158472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.158851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.158881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.159265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.159296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.159679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.159709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.160187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.160218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.160490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.160519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.160763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.160793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.161059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.161088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.161450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.161481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.161878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.161908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 
00:30:57.175 [2024-11-28 08:29:54.162181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.162212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.162665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.162696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.163068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.163098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.163483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.163514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.163891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.163921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.164290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.164323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.164727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.164757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.165139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.165182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.165460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.165490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.165736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.165765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 
00:30:57.175 [2024-11-28 08:29:54.166061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.175 [2024-11-28 08:29:54.166091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.175 qpair failed and we were unable to recover it. 00:30:57.175 [2024-11-28 08:29:54.166335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.176 [2024-11-28 08:29:54.166365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.176 qpair failed and we were unable to recover it. 00:30:57.176 [2024-11-28 08:29:54.166707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.176 [2024-11-28 08:29:54.166736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.176 qpair failed and we were unable to recover it. 00:30:57.176 [2024-11-28 08:29:54.166985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.176 [2024-11-28 08:29:54.167013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.176 qpair failed and we were unable to recover it. 00:30:57.176 [2024-11-28 08:29:54.167395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.176 [2024-11-28 08:29:54.167426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.176 qpair failed and we were unable to recover it. 00:30:57.176 [2024-11-28 08:29:54.167807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.176 [2024-11-28 08:29:54.167836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.176 qpair failed and we were unable to recover it. 00:30:57.176 [2024-11-28 08:29:54.168208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.176 [2024-11-28 08:29:54.168239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.176 qpair failed and we were unable to recover it. 00:30:57.176 [2024-11-28 08:29:54.168632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.176 [2024-11-28 08:29:54.168662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.176 qpair failed and we were unable to recover it. 00:30:57.176 [2024-11-28 08:29:54.168927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.176 [2024-11-28 08:29:54.168956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.176 qpair failed and we were unable to recover it. 00:30:57.176 [2024-11-28 08:29:54.169362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.176 [2024-11-28 08:29:54.169394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.176 qpair failed and we were unable to recover it. 
00:30:57.176 [2024-11-28 08:29:54.169761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.176 [2024-11-28 08:29:54.169791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.176 qpair failed and we were unable to recover it.
00:30:57.176 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" triple for tqpair=0x7f68c4000b90 repeats through 2024-11-28 08:29:54.185308 ...]
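On Linux, errno = 111 is ECONNREFUSED: the target answered, but nothing was listening on 10.0.0.2:4420 (4420 is the conventional NVMe/TCP port). A minimal standalone sketch, not taken from the SPDK tree, that reproduces the same errno when connecting to a reachable host with no listener; the address and port are copied from the log for illustration only:

    /* sketch: connect() to a reachable host with no listener -> errno 111 */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = {0};
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr); /* target address from the log */
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }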
00:30:57.177 Read completed with error (sct=0, sc=8) 00:30:57.177 starting I/O failed
00:30:57.177 [... 31 more in-flight reads and writes complete with error (sct=0, sc=8), each marked "starting I/O failed", 32 completions in total ...]
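The (sct=0, sc=8) pairs are NVMe completion status fields: status code type 0 is the generic command status set, and, per the NVMe base specification, status code 08h in that set is "Command Aborted due to SQ Deletion", which is what a host sees for I/Os still in flight when the queue pair is torn down. A small decoder for the generic codes relevant here, written for illustration rather than taken from any SPDK source:

    /* sketch: decode the (sct, sc) values printed in the log above */
    #include <stdio.h>

    static const char *generic_sc_str(unsigned sc)
    {
        switch (sc) {
        case 0x00: return "Successful Completion";
        case 0x04: return "Data Transfer Error";
        case 0x07: return "Command Abort Requested";
        case 0x08: return "Command Aborted due to SQ Deletion";
        default:   return "Other generic status";
        }
    }

    int main(void)
    {
        unsigned sct = 0, sc = 8; /* values taken from the log lines above */
        if (sct == 0)
            printf("sct=%u, sc=%u: %s\n", sct, sc, generic_sc_str(sc));
        return 0;
    }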
00:30:57.177 [2024-11-28 08:29:54.186111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:57.177 [2024-11-28 08:29:54.186678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.186803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it.
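The -6 in the CQ transport error line is a negative errno, -ENXIO ("No such device or address"), matching the strerror text printed in the log. SPDK's public spdk_nvme_qpair_process_completions() returns the number of completions processed, or a negative errno once the transport has failed. The fragment below is an illustrative host-side poll step under that assumption, not SPDK's own recovery path, and presumes an already-connected qpair:

    /* sketch: surface a CQ transport error from a completion-poll loop */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include "spdk/nvme.h"

    static void poll_once(struct spdk_nvme_qpair *qpair)
    {
        /* 0 == no limit on completions processed in this call */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);
        if (rc < 0) {
            /* e.g. rc == -ENXIO (-6) after the TCP connection dropped */
            fprintf(stderr, "CQ transport error %d (%s)\n", rc, strerror(-rc));
        }
    }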
00:30:57.178 [2024-11-28 08:29:54.189893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.189924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it. 00:30:57.178 [2024-11-28 08:29:54.190182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.190214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it. 00:30:57.178 [2024-11-28 08:29:54.190568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.190598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it. 00:30:57.178 [2024-11-28 08:29:54.190957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.190988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it. 00:30:57.178 [2024-11-28 08:29:54.191533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.191639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it. 00:30:57.178 [2024-11-28 08:29:54.192117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.192155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it. 00:30:57.178 [2024-11-28 08:29:54.192454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.192485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it. 00:30:57.178 [2024-11-28 08:29:54.192835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.192867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it. 00:30:57.178 [2024-11-28 08:29:54.193244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.193277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it. 00:30:57.178 [2024-11-28 08:29:54.193556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.193587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it. 
00:30:57.178 [2024-11-28 08:29:54.193969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.193999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it. 00:30:57.178 [2024-11-28 08:29:54.194399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.194431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it. 00:30:57.178 [2024-11-28 08:29:54.194818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.194849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it. 00:30:57.178 [2024-11-28 08:29:54.195241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.195272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it. 00:30:57.178 [2024-11-28 08:29:54.195638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.195668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it. 00:30:57.178 [2024-11-28 08:29:54.195911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.195940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it. 00:30:57.178 [2024-11-28 08:29:54.196229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.196260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it. 00:30:57.178 [2024-11-28 08:29:54.196604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.196633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it. 00:30:57.178 [2024-11-28 08:29:54.197018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.197048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it. 00:30:57.178 [2024-11-28 08:29:54.197464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.197496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it. 
00:30:57.178 [2024-11-28 08:29:54.197850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.197879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it. 00:30:57.178 [2024-11-28 08:29:54.198145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.198193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it. 00:30:57.178 [2024-11-28 08:29:54.198550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.198579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it. 00:30:57.178 [2024-11-28 08:29:54.198959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.178 [2024-11-28 08:29:54.198988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.178 qpair failed and we were unable to recover it. 00:30:57.178 [2024-11-28 08:29:54.199217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.199248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.199639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.199677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.200030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.200060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.200304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.200335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.200590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.200618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.200936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.200965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 
00:30:57.179 [2024-11-28 08:29:54.201343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.201374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.201756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.201786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.202016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.202047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.202321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.202353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.202738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.202767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.203147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.203190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.203427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.203459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.203847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.203877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.204250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.204282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.204660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.204689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 
00:30:57.179 [2024-11-28 08:29:54.205071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.205100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.205482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.205512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.205878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.205908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.206156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.206202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.206567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.206597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.206978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.207007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.207374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.207405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.207795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.207824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.208153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.208194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.208576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.208605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 
00:30:57.179 [2024-11-28 08:29:54.208987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.209016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.209276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.209311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.209727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.209757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.210113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.210142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.210529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.210558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.179 [2024-11-28 08:29:54.210975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.179 [2024-11-28 08:29:54.211005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.179 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.211374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.211406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.211678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.211707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.212095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.212125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.212428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.212458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 
00:30:57.180 [2024-11-28 08:29:54.212841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.212871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.213237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.213268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.213613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.213642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.213991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.214021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.214389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.214419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.214791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.214826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.215189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.215221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.215584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.215613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.216019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.216048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.216275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.216308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 
00:30:57.180 [2024-11-28 08:29:54.216752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.216781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.217145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.217184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.217451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.217480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.217853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.217882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.218243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.218274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.218702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.218732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.219045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.219074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.219468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.219498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.219886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.219916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.220277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.220308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 
00:30:57.180 [2024-11-28 08:29:54.220732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.220763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.221117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.221147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.221518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.180 [2024-11-28 08:29:54.221548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.180 qpair failed and we were unable to recover it. 00:30:57.180 [2024-11-28 08:29:54.221896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.221926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 00:30:57.181 [2024-11-28 08:29:54.222091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.222125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 00:30:57.181 [2024-11-28 08:29:54.222514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.222547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 00:30:57.181 [2024-11-28 08:29:54.222896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.222925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 00:30:57.181 [2024-11-28 08:29:54.223328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.223358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 00:30:57.181 [2024-11-28 08:29:54.223722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.223751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 00:30:57.181 [2024-11-28 08:29:54.224133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.224177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 
00:30:57.181 [2024-11-28 08:29:54.224513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.224542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 00:30:57.181 [2024-11-28 08:29:54.224926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.224955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 00:30:57.181 [2024-11-28 08:29:54.225258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.225289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 00:30:57.181 [2024-11-28 08:29:54.225694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.225724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 00:30:57.181 [2024-11-28 08:29:54.226120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.226151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 00:30:57.181 [2024-11-28 08:29:54.226538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.226569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 00:30:57.181 [2024-11-28 08:29:54.226966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.226995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 00:30:57.181 [2024-11-28 08:29:54.227313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.227343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 00:30:57.181 [2024-11-28 08:29:54.227702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.227731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 00:30:57.181 [2024-11-28 08:29:54.228107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.228136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 
00:30:57.181 [2024-11-28 08:29:54.228579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.228609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 00:30:57.181 [2024-11-28 08:29:54.229050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.229080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 00:30:57.181 [2024-11-28 08:29:54.229417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.229449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 00:30:57.181 [2024-11-28 08:29:54.229815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.229845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 00:30:57.181 [2024-11-28 08:29:54.230204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.230235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 00:30:57.181 [2024-11-28 08:29:54.230507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.230545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 00:30:57.181 [2024-11-28 08:29:54.230902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.230931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 00:30:57.181 [2024-11-28 08:29:54.231305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.231335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 00:30:57.181 [2024-11-28 08:29:54.231707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.231736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 00:30:57.181 [2024-11-28 08:29:54.232002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.232031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it. 
00:30:57.181 [2024-11-28 08:29:54.232406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.181 [2024-11-28 08:29:54.232435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.181 qpair failed and we were unable to recover it.
[... the same pair of errors (posix_sock_create connect() failed, errno = 111, i.e. ECONNREFUSED; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it.", repeats continuously from 08:29:54.232 through 08:29:54.243 ...]
00:30:57.182 [2024-11-28 08:29:54.243593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[... the connect()/qpair-failure pattern continues uninterrupted from 08:29:54.243 through 08:29:54.292, still errno = 111 against tqpair=0x7f68c0000b90 at 10.0.0.2:4420 ...]
[... eight more connect()/qpair failures from 08:29:54.293 through 08:29:54.295 ...]
00:30:57.186 [2024-11-28 08:29:54.296236] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:57.186 [2024-11-28 08:29:54.296284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:57.186 [2024-11-28 08:29:54.296295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:57.186 [2024-11-28 08:29:54.296304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:57.186 [2024-11-28 08:29:54.296312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:57.186 [2024-11-28 08:29:54.296263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.186 [2024-11-28 08:29:54.296294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.186 qpair failed and we were unable to recover it.
[... six more connect()/qpair failures from 08:29:54.296 through 08:29:54.298 ...]
00:30:57.186 [2024-11-28 08:29:54.298588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:30:57.186 [2024-11-28 08:29:54.298739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:30:57.186 [2024-11-28 08:29:54.298832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:30:57.186 [2024-11-28 08:29:54.298832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:30:57.186 [2024-11-28 08:29:54.298861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.186 [2024-11-28 08:29:54.298889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.186 qpair failed and we were unable to recover it.
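The app_setup_trace notices above name the two supported ways to get at the tracepoint data for this run. As a minimal sketch, assuming the app really is still up as instance 0 under the name nvmf exactly as the notices state (the instance id and shm filename vary per application):

  $ spdk_trace -s nvmf -i 0       # snapshot tracepoint events at runtime
  $ cp /dev/shm/nvmf_trace.0 ./   # or keep the raw shm trace for offline analysis/debug

Per the notice, a bare 'spdk_trace' with no parameters would also work here, since this is the only SPDK application currently running.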
00:30:57.186 [2024-11-28 08:29:54.302507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.186 [2024-11-28 08:29:54.302536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.186 qpair failed and we were unable to recover it. 00:30:57.186 [2024-11-28 08:29:54.302916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.186 [2024-11-28 08:29:54.302946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.186 qpair failed and we were unable to recover it. 00:30:57.186 [2024-11-28 08:29:54.303199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.186 [2024-11-28 08:29:54.303230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.186 qpair failed and we were unable to recover it. 00:30:57.186 [2024-11-28 08:29:54.303585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.186 [2024-11-28 08:29:54.303616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.186 qpair failed and we were unable to recover it. 00:30:57.186 [2024-11-28 08:29:54.303981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.304010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.304375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.304405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.304767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.304796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.305201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.305233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.305559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.305589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.305959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.305988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 
00:30:57.187 [2024-11-28 08:29:54.306254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.306284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.306684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.306713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.307089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.307119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.307464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.307494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.307852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.307882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.308257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.308287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.308643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.308672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.309041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.309071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.309297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.309328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.309646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.309676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 
00:30:57.187 [2024-11-28 08:29:54.309939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.309970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.310268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.310298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.310663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.310699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.311078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.311108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.311452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.311482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.311845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.311876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.312116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.312145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.312527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.312557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.312928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.312957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.313319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.313350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 
00:30:57.187 [2024-11-28 08:29:54.313710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.313740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.313962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.313992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.314401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.314431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.314658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.314687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.315064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.315093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.315511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.315542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.315901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.315930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.316228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.316260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.316623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.316654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.317013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.317041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 
00:30:57.187 [2024-11-28 08:29:54.317308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.317338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.187 [2024-11-28 08:29:54.317685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.187 [2024-11-28 08:29:54.317714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.187 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.318079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.318107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.318469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.318501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.318720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.318750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.318983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.319012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.319336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.319367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.319740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.319770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.320135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.320176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.320541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.320571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 
00:30:57.188 [2024-11-28 08:29:54.320847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.320876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.321105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.321134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.321395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.321425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.321794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.321823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.322086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.322115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.322380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.322411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.322777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.322806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.323169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.323201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.323459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.323488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.323856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.323886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 
00:30:57.188 [2024-11-28 08:29:54.324266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.324297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.324637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.324668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.325033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.325071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.325454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.325484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.325851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.325881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.326254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.326284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.326664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.326692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.327063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.327094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.327462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.327492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.327757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.327789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 
00:30:57.188 [2024-11-28 08:29:54.328145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.328196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.328432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.328462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.328827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.328857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.329226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.329256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.329647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.329678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.330048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.330078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.330429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.330459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.330837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.330866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.331241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.331272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 00:30:57.188 [2024-11-28 08:29:54.331651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.188 [2024-11-28 08:29:54.331680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.188 qpair failed and we were unable to recover it. 
00:30:57.189 [2024-11-28 08:29:54.331934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.331963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.332315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.332347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.332727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.332756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.333108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.333138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.333514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.333544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.333939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.333969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.334325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.334356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.334479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.334506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.334860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.334890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.335272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.335305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 
00:30:57.189 [2024-11-28 08:29:54.335655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.335684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.335906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.335935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.336311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.336343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.336717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.336746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.337108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.337139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.337513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.337543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.337912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.337942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.338182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.338213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.338594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.338625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.338993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.339023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 
00:30:57.189 [2024-11-28 08:29:54.339394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.339425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.339783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.339812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.340181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.340218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.340611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.340640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.341019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.341048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.341284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.341314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.341696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.341725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.342085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.342114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.342490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.342521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.342892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.342922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 
00:30:57.189 [2024-11-28 08:29:54.343147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.343193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.343453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.343482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.343753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.189 [2024-11-28 08:29:54.343786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.189 qpair failed and we were unable to recover it. 00:30:57.189 [2024-11-28 08:29:54.344125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.344155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.344546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.344577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.344948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.344976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.345429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.345460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.345828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.345858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.346229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.346259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.346483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.346512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 
00:30:57.190 [2024-11-28 08:29:54.346852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.346882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.347102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.347130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.347508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.347538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.347803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.347834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.348179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.348211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.348577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.348608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.348956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.348987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.349285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.349316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.349668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.349698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.349930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.349966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 
00:30:57.190 [2024-11-28 08:29:54.350328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.350359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.350587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.350615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.350957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.350992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.351314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.351346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.351557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.351585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.351964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.351993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.352365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.352397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.352637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.352665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.352898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.352927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.353294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.353325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 
00:30:57.190 [2024-11-28 08:29:54.353687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.353715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.354099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.354128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.354488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.354520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.354907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.354937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.355293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.355326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.355683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.355712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.356085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.356115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.356501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.356533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.356889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.356920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 00:30:57.190 [2024-11-28 08:29:54.357304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.190 [2024-11-28 08:29:54.357338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.190 qpair failed and we were unable to recover it. 
00:30:57.190 [2024-11-28 08:29:54.357680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.190 [2024-11-28 08:29:54.357709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420
00:30:57.190 qpair failed and we were unable to recover it.
00:30:57.196 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for every reconnect attempt against tqpair=0x7f68c0000b90 (addr=10.0.0.2, port=4420) from 08:29:54.357680 through 08:29:54.431561; only the timestamps differ ...]
00:30:57.196 [2024-11-28 08:29:54.431790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.196 [2024-11-28 08:29:54.431818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.196 qpair failed and we were unable to recover it. 00:30:57.196 [2024-11-28 08:29:54.432110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.196 [2024-11-28 08:29:54.432140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.196 qpair failed and we were unable to recover it. 00:30:57.196 [2024-11-28 08:29:54.432400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.196 [2024-11-28 08:29:54.432429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.196 qpair failed and we were unable to recover it. 00:30:57.196 [2024-11-28 08:29:54.432793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.196 [2024-11-28 08:29:54.432822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.196 qpair failed and we were unable to recover it. 00:30:57.196 [2024-11-28 08:29:54.433055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.196 [2024-11-28 08:29:54.433083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.196 qpair failed and we were unable to recover it. 00:30:57.196 [2024-11-28 08:29:54.433446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.196 [2024-11-28 08:29:54.433476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.196 qpair failed and we were unable to recover it. 00:30:57.196 [2024-11-28 08:29:54.433726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.196 [2024-11-28 08:29:54.433756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.196 qpair failed and we were unable to recover it. 00:30:57.196 [2024-11-28 08:29:54.434111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.196 [2024-11-28 08:29:54.434139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.196 qpair failed and we were unable to recover it. 00:30:57.196 [2024-11-28 08:29:54.434503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.196 [2024-11-28 08:29:54.434534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.196 qpair failed and we were unable to recover it. 00:30:57.196 [2024-11-28 08:29:54.434906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.196 [2024-11-28 08:29:54.434941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.196 qpair failed and we were unable to recover it. 
00:30:57.196 [2024-11-28 08:29:54.435316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.196 [2024-11-28 08:29:54.435347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.196 qpair failed and we were unable to recover it. 00:30:57.196 [2024-11-28 08:29:54.435582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.196 [2024-11-28 08:29:54.435611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.196 qpair failed and we were unable to recover it. 00:30:57.196 [2024-11-28 08:29:54.435863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.196 [2024-11-28 08:29:54.435895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.196 qpair failed and we were unable to recover it. 00:30:57.196 [2024-11-28 08:29:54.436235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.196 [2024-11-28 08:29:54.436266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.196 qpair failed and we were unable to recover it. 00:30:57.196 [2024-11-28 08:29:54.436516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.196 [2024-11-28 08:29:54.436544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.196 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.436933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.436962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.437280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.437310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.437543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.437571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.437817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.437846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.438071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.438100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 
00:30:57.197 [2024-11-28 08:29:54.438477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.438507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.438733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.438762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.439066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.439095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.439351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.439382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.439751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.439780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.440124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.440154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.440395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.440424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.440673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.440703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.441042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.441071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.441433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.441463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 
00:30:57.197 [2024-11-28 08:29:54.441860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.441889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.442258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.442288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.442699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.442727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.442936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.442965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.443333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.443364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.443741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.443769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.444155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.444194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.444575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.444605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.444857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.444884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.445179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.445212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 
00:30:57.197 [2024-11-28 08:29:54.445605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.445635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.446001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.446029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.446328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.446358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.446575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.446604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.446819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.446847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.447237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.447267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.447491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.447520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.447878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.447907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.448122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.448150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.448556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.448591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 
00:30:57.197 [2024-11-28 08:29:54.448797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.448825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.449203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.449232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.197 [2024-11-28 08:29:54.449528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.197 [2024-11-28 08:29:54.449556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.197 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.449796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.449828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.450207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.450238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.450609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.450639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.451009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.451038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.451270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.451300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.451547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.451575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.451819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.451848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 
00:30:57.471 [2024-11-28 08:29:54.452251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.452280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.452637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.452665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.453061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.453090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.453449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.453478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.453850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.453880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.454148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.454202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.454439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.454468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.454929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.454958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.455296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.455326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.455697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.455726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 
00:30:57.471 [2024-11-28 08:29:54.456071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.456099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.456485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.456514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.456627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.456658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.457027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.457056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.457286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.457315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.457572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.457601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.457856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.457885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.458254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.458283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.458657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.458686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 00:30:57.471 [2024-11-28 08:29:54.458898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.471 [2024-11-28 08:29:54.458927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.471 qpair failed and we were unable to recover it. 
00:30:57.471 [2024-11-28 08:29:54.459277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.459307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.459696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.459726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.459892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.459921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.460264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.460293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.460562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.460593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.460729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.460758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.461123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.461151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.461381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.461410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.461797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.461826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.462069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.462103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 
00:30:57.472 [2024-11-28 08:29:54.462568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.462599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.462961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.462988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.463344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.463374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.463753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.463782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.464148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.464184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.464441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.464473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.464858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.464888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.465251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.465281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.465506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.465534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.465904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.465933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 
00:30:57.472 [2024-11-28 08:29:54.466274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.466304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.466516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.466544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.466911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.466939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.467208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.467237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.467484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.467513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.467884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.467913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.468118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.468147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.468535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.468565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.468788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.468816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.469045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.469076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 
00:30:57.472 [2024-11-28 08:29:54.469326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.469357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.469706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.469734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.472 qpair failed and we were unable to recover it. 00:30:57.472 [2024-11-28 08:29:54.470089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.472 [2024-11-28 08:29:54.470118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.473 qpair failed and we were unable to recover it. 00:30:57.473 [2024-11-28 08:29:54.470370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.473 [2024-11-28 08:29:54.470400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.473 qpair failed and we were unable to recover it. 00:30:57.473 [2024-11-28 08:29:54.470774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.473 [2024-11-28 08:29:54.470803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.473 qpair failed and we were unable to recover it. 00:30:57.473 [2024-11-28 08:29:54.471168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.473 [2024-11-28 08:29:54.471198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.473 qpair failed and we were unable to recover it. 00:30:57.473 [2024-11-28 08:29:54.471572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.473 [2024-11-28 08:29:54.471602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.473 qpair failed and we were unable to recover it. 00:30:57.473 [2024-11-28 08:29:54.471974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.473 [2024-11-28 08:29:54.472004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.473 qpair failed and we were unable to recover it. 00:30:57.473 [2024-11-28 08:29:54.472356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.473 [2024-11-28 08:29:54.472387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.473 qpair failed and we were unable to recover it. 00:30:57.473 [2024-11-28 08:29:54.472743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.473 [2024-11-28 08:29:54.472772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.473 qpair failed and we were unable to recover it. 
00:30:57.473 [2024-11-28 08:29:54.473124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.473 [2024-11-28 08:29:54.473153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.473 qpair failed and we were unable to recover it. 00:30:57.473 [2024-11-28 08:29:54.473578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.473 [2024-11-28 08:29:54.473607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.473 qpair failed and we were unable to recover it. 00:30:57.473 [2024-11-28 08:29:54.473958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.473 [2024-11-28 08:29:54.473987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.473 qpair failed and we were unable to recover it. 00:30:57.473 [2024-11-28 08:29:54.474346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.473 [2024-11-28 08:29:54.474376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.473 qpair failed and we were unable to recover it. 00:30:57.473 [2024-11-28 08:29:54.474587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.473 [2024-11-28 08:29:54.474615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.473 qpair failed and we were unable to recover it. 00:30:57.473 [2024-11-28 08:29:54.474979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.473 [2024-11-28 08:29:54.475008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.473 qpair failed and we were unable to recover it. 00:30:57.473 [2024-11-28 08:29:54.475246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.473 [2024-11-28 08:29:54.475276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.473 qpair failed and we were unable to recover it. 00:30:57.473 [2024-11-28 08:29:54.475710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.473 [2024-11-28 08:29:54.475738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.473 qpair failed and we were unable to recover it. 00:30:57.473 [2024-11-28 08:29:54.476096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.473 [2024-11-28 08:29:54.476125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.473 qpair failed and we were unable to recover it. 00:30:57.473 [2024-11-28 08:29:54.476476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.473 [2024-11-28 08:29:54.476519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.473 qpair failed and we were unable to recover it. 
00:30:57.473 [2024-11-28 08:29:54.476724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.473 [2024-11-28 08:29:54.476753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420
00:30:57.473 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create connect() failure with errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for roughly 200 further connect attempts, timestamps 2024-11-28 08:29:54.476996 through 08:29:54.549936; only the timestamps differ ...]
00:30:57.479 [2024-11-28 08:29:54.550117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.479 [2024-11-28 08:29:54.550146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420
00:30:57.479 qpair failed and we were unable to recover it.
00:30:57.479 [2024-11-28 08:29:54.550402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.550434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.550667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.550699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.550932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.550961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.551330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.551362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.551684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.551714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.552067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.552097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.552467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.552497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.552866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.552894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.553102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.553129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.553371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.553401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 
00:30:57.479 [2024-11-28 08:29:54.553753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.553781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.553972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.554001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.554382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.554412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.554509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.554537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.554981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.555076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.555499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.555596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.555975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.556035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.556441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.556537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.556974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.557011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.557378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.557410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 
00:30:57.479 [2024-11-28 08:29:54.557776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.557806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.558145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.558185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.558606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.558634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.558976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.559006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.559362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.559392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.559729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.559758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.559998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.560026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.560249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.560281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.560679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.560707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 00:30:57.479 [2024-11-28 08:29:54.561045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.479 [2024-11-28 08:29:54.561075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.479 qpair failed and we were unable to recover it. 
00:30:57.479 [2024-11-28 08:29:54.561460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.561491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.561743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.561770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.562117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.562146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.562477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.562508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.562741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.562770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.563090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.563120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.563508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.563539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.563913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.563941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.564179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.564209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.564573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.564602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 
00:30:57.480 [2024-11-28 08:29:54.564960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.564991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.565324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.565355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.565505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.565539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.565799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.565828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.566036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.566066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.566341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.566372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.566736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.566765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.567114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.567143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.567499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.567529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.567876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.567905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 
00:30:57.480 [2024-11-28 08:29:54.568146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.568185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.568518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.568547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.568757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.568785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.569134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.569175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.569543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.569572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.569932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.569961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.570192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.570228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.570483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.570516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.570775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.570808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.571022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.571051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 
00:30:57.480 [2024-11-28 08:29:54.571426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.571457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.571808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.571837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.572049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.480 [2024-11-28 08:29:54.572080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.480 qpair failed and we were unable to recover it. 00:30:57.480 [2024-11-28 08:29:54.572421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.572452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.572796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.572824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.573175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.573204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.573566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.573595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.573926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.573955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.574304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.574335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.574707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.574737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 
00:30:57.481 [2024-11-28 08:29:54.575079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.575108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.575496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.575527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.575728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.575757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.576106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.576134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.576495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.576525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.576882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.576911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.577272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.577301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.577644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.577674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.577896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.577925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.578272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.578304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 
00:30:57.481 [2024-11-28 08:29:54.578660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.578689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.579047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.579076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.579302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.579332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.579687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.579716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.580059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.580087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.580462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.580493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.580841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.580869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.581117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.581145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.581468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.581498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.581861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.581889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 
00:30:57.481 [2024-11-28 08:29:54.582235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.582266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.582644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.582673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.583027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.583056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.583280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.583310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.583551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.583580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.583941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.583969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.584175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.584212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.584561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.584590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.584964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.584992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.585201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.585231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 
00:30:57.481 [2024-11-28 08:29:54.585477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.481 [2024-11-28 08:29:54.585506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.481 qpair failed and we were unable to recover it. 00:30:57.481 [2024-11-28 08:29:54.585598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.585626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.585864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.585892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.586258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.586289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.586643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.586672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.586879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.586907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.587263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.587292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.587530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.587558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.587753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.587782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.587995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.588023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 
00:30:57.482 [2024-11-28 08:29:54.588405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.588435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.588674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.588702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.588926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.588955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.589319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.589349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.589555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.589584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.589819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.589848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.590185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.590216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.590573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.590602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.590941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.590970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.591319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.591350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 
00:30:57.482 [2024-11-28 08:29:54.591698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.591728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.592062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.592091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.592456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.592487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.592731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.592759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.593106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.593135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.593503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.593534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.593883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.593911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.594183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.594213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.594581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.594610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 00:30:57.482 [2024-11-28 08:29:54.594956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.482 [2024-11-28 08:29:54.594985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420 00:30:57.482 qpair failed and we were unable to recover it. 
00:30:57.482 [2024-11-28 08:29:54.595337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.482 [2024-11-28 08:29:54.595366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c4000b90 with addr=10.0.0.2, port=4420
00:30:57.482 qpair failed and we were unable to recover it.
[... the same three-message group repeats continuously for tqpair=0x7f68c4000b90 from 08:29:54.595 through 08:29:54.664, differing only in timestamps; duplicate entries elided ...]
00:30:57.488 [2024-11-28 08:29:54.664635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.664735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.665190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.665229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.665670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.665766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.666074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.666111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.666607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.666706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.667157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.667282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.667489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.667519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.667869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.667899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.668155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.668197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.668458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.668487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 
00:30:57.488 [2024-11-28 08:29:54.668756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.668792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.668921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.668951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.669226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.669257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.669518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.669547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.669780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.669809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.670056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.670084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.670469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.670501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.670810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.670845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.671078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.671107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.671265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.671296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 
00:30:57.488 [2024-11-28 08:29:54.671541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.671576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.671935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.671964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.672105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.672134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.672506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.672536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.672774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.672802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.673156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.673196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.673418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.673447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.673659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.673688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.673905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.673933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.674144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.674183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 
00:30:57.488 [2024-11-28 08:29:54.674544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.488 [2024-11-28 08:29:54.674572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.488 qpair failed and we were unable to recover it. 00:30:57.488 [2024-11-28 08:29:54.674927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.674955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.675189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.675220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.675591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.675620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.675986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.676015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.676390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.676421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.676746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.676775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.677133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.677188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.677419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.677448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.677728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.677757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 
00:30:57.489 [2024-11-28 08:29:54.678117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.678146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.678526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.678556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.678922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.678951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.679155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.679196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.679403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.679432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.679812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.679841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.680185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.680217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.680569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.680604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.680824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.680852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.681183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.681213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 
00:30:57.489 [2024-11-28 08:29:54.681428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.681457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.681765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.681795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.682119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.682148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.682504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.682533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.682765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.682793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.683016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.683045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.683385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.683416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.683770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.683799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.684140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.684189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.684416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.684445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 
00:30:57.489 [2024-11-28 08:29:54.684811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.684840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.685193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.685225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.685648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.489 [2024-11-28 08:29:54.685677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.489 qpair failed and we were unable to recover it. 00:30:57.489 [2024-11-28 08:29:54.686004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.686033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.686421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.686451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.686794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.686823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.686918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.686948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.687290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.687320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.687689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.687717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.688068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.688098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 
00:30:57.490 [2024-11-28 08:29:54.688233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.688263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.688619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.688648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.688991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.689021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.689402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.689432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.689694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.689728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.690091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.690120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.690378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.690407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.690647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.690675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.690915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.690944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.691295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.691325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 
00:30:57.490 [2024-11-28 08:29:54.691573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.691601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.691972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.692001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.692388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.692418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.692640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.692668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.692996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.693026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.693363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.693394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.693754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.693783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.694013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.694043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.694406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.694436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.694673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.694701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 
00:30:57.490 [2024-11-28 08:29:54.695055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.695084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.695457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.695487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.695799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.695828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.696030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.696059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.696411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.696441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.696792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.696821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.697170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.697201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.697300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.697329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.697561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.697590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.697977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.698006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 
00:30:57.490 [2024-11-28 08:29:54.698323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.698353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.490 qpair failed and we were unable to recover it. 00:30:57.490 [2024-11-28 08:29:54.698693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.490 [2024-11-28 08:29:54.698727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.698874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.698902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.699123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.699151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.699369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.699399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.699733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.699761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.699994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.700022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.700359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.700389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.700721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.700750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.700870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.700903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 
00:30:57.491 [2024-11-28 08:29:54.701139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.701176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.701516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.701544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.701747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.701775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.702024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.702054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.702242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.702272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.702646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.702676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.703027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.703057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.703394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.703424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.703788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.703816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.704046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.704074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 
00:30:57.491 [2024-11-28 08:29:54.704220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.704248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.704601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.704630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.704985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.705015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.705124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.705157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc0c0 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.705584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.705693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.706085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.706123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.706589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.706683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.707102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.707139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.707528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.707623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.707927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.707965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 
00:30:57.491 [2024-11-28 08:29:54.708457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.708550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.708956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.708994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.709352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.709386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.709592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.709621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.709977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.710006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.710170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.710205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.710632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.710662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.711020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.711050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.711280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.711310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.491 qpair failed and we were unable to recover it. 00:30:57.491 [2024-11-28 08:29:54.711548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.491 [2024-11-28 08:29:54.711578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.492 qpair failed and we were unable to recover it. 
00:30:57.492 [2024-11-28 08:29:54.711888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.492 [2024-11-28 08:29:54.711916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420
00:30:57.492 qpair failed and we were unable to recover it.
00:30:57.492 [2024-11-28 08:29:54.712267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.492 [2024-11-28 08:29:54.712298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420
00:30:57.492 qpair failed and we were unable to recover it.
[... the same three-line failure — posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats continuously from [2024-11-28 08:29:54.712547] through [2024-11-28 08:29:54.783293], with the elapsed-time prefix advancing from 00:30:57.492 to 00:30:57.779 ...]
00:30:57.779 [2024-11-28 08:29:54.783492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.779 [2024-11-28 08:29:54.783519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.779 qpair failed and we were unable to recover it. 00:30:57.779 [2024-11-28 08:29:54.783893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.779 [2024-11-28 08:29:54.783922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.779 qpair failed and we were unable to recover it. 00:30:57.779 [2024-11-28 08:29:54.784193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.779 [2024-11-28 08:29:54.784224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.779 qpair failed and we were unable to recover it. 00:30:57.779 [2024-11-28 08:29:54.784590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.779 [2024-11-28 08:29:54.784618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.779 qpair failed and we were unable to recover it. 00:30:57.779 [2024-11-28 08:29:54.784977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.779 [2024-11-28 08:29:54.785006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.779 qpair failed and we were unable to recover it. 00:30:57.779 [2024-11-28 08:29:54.785348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.779 [2024-11-28 08:29:54.785385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.779 qpair failed and we were unable to recover it. 00:30:57.779 [2024-11-28 08:29:54.785749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.779 [2024-11-28 08:29:54.785778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.779 qpair failed and we were unable to recover it. 00:30:57.779 [2024-11-28 08:29:54.786104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.779 [2024-11-28 08:29:54.786133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.779 qpair failed and we were unable to recover it. 00:30:57.779 [2024-11-28 08:29:54.786382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.779 [2024-11-28 08:29:54.786413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.779 qpair failed and we were unable to recover it. 00:30:57.779 [2024-11-28 08:29:54.786759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.779 [2024-11-28 08:29:54.786788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.779 qpair failed and we were unable to recover it. 
00:30:57.779 [2024-11-28 08:29:54.787012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.779 [2024-11-28 08:29:54.787040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.779 qpair failed and we were unable to recover it. 00:30:57.779 [2024-11-28 08:29:54.787383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.779 [2024-11-28 08:29:54.787412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.779 qpair failed and we were unable to recover it. 00:30:57.779 [2024-11-28 08:29:54.787771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.779 [2024-11-28 08:29:54.787800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.779 qpair failed and we were unable to recover it. 00:30:57.779 [2024-11-28 08:29:54.788133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.779 [2024-11-28 08:29:54.788172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.779 qpair failed and we were unable to recover it. 00:30:57.779 [2024-11-28 08:29:54.788405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.779 [2024-11-28 08:29:54.788433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.779 qpair failed and we were unable to recover it. 00:30:57.779 [2024-11-28 08:29:54.788790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.788820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.789074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.789103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.789472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.789502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.789849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.789879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.790237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.790268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 
00:30:57.780 [2024-11-28 08:29:54.790662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.790691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.791009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.791037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.791387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.791417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.791765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.791794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.792140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.792188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.792516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.792546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.792885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.792914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.793166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.793198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.793561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.793590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.793803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.793835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 
00:30:57.780 [2024-11-28 08:29:54.794229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.794260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.794596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.794625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.794858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.794887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.795237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.795268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.795623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.795653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.796007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.796037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.796282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.796314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.796592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.796621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.796957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.796987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.797334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.797363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 
00:30:57.780 [2024-11-28 08:29:54.797717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.797746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.798098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.798126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.798369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.798399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.798725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.798755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.799093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.799122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.799472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.799509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.799748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.799780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.800111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.800140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.800493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.800522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.800864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.800893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 
00:30:57.780 [2024-11-28 08:29:54.801001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.801033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.801387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.801418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.780 [2024-11-28 08:29:54.801769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.780 [2024-11-28 08:29:54.801797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.780 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.802131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.802170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.802582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.802611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.802944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.802972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.803316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.803346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.803693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.803721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.804066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.804095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.804453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.804484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 
00:30:57.781 [2024-11-28 08:29:54.804707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.804736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.805087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.805116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.805464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.805494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.805824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.805853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.806193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.806224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.806456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.806486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.806830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.806860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.807064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.807095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.807433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.807463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.807809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.807838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 
00:30:57.781 [2024-11-28 08:29:54.808186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.808216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.808431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.808459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.808865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.808895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.809231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.809261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.809461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.809489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.809869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.809898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.810231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.810262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.810624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.810654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.810924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.810952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.811290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.811320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 
00:30:57.781 [2024-11-28 08:29:54.811547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.811575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.811932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.811960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.812298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.812328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.812545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.781 [2024-11-28 08:29:54.812576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.781 qpair failed and we were unable to recover it. 00:30:57.781 [2024-11-28 08:29:54.812962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.812991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.813331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.813368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.813715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.813746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.813952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.813981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.814299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.814329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.814663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.814692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 
00:30:57.782 [2024-11-28 08:29:54.815037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.815066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.815285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.815314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.815544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.815573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.815802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.815830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.816192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.816222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.816574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.816603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.816827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.816859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.817118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.817147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.817372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.817402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.817642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.817670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 
00:30:57.782 [2024-11-28 08:29:54.817895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.817924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.818277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.818308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.818627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.818656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.819003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.819032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.819402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.819433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.819779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.819808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.820030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.820058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.820408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.820439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.820527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.820554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.820736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.820815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 
00:30:57.782 [2024-11-28 08:29:54.821073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.821107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.821582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.821676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.821945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.821982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.822409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.822503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.822774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.822811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.823174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.823207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.823550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.823580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.823814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.823842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.824088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.824121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.824350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.824381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 
00:30:57.782 [2024-11-28 08:29:54.824645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.824680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.824838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.782 [2024-11-28 08:29:54.824868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.782 qpair failed and we were unable to recover it. 00:30:57.782 [2024-11-28 08:29:54.825228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.783 [2024-11-28 08:29:54.825258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.783 qpair failed and we were unable to recover it. 00:30:57.783 [2024-11-28 08:29:54.825608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.783 [2024-11-28 08:29:54.825637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.783 qpair failed and we were unable to recover it. 00:30:57.783 [2024-11-28 08:29:54.825982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.783 [2024-11-28 08:29:54.826011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.783 qpair failed and we were unable to recover it. 00:30:57.783 [2024-11-28 08:29:54.826374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.783 [2024-11-28 08:29:54.826416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.783 qpair failed and we were unable to recover it. 00:30:57.783 [2024-11-28 08:29:54.826750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.783 [2024-11-28 08:29:54.826779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.783 qpair failed and we were unable to recover it. 00:30:57.783 [2024-11-28 08:29:54.827107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.783 [2024-11-28 08:29:54.827136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.783 qpair failed and we were unable to recover it. 00:30:57.783 [2024-11-28 08:29:54.827522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.783 [2024-11-28 08:29:54.827552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.783 qpair failed and we were unable to recover it. 00:30:57.783 [2024-11-28 08:29:54.827767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.783 [2024-11-28 08:29:54.827800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.783 qpair failed and we were unable to recover it. 
00:30:57.783 [2024-11-28 08:29:54.828131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.783 [2024-11-28 08:29:54.828171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420
00:30:57.783 qpair failed and we were unable to recover it.
00:30:57.788 [... the same three-line failure sequence repeats back-to-back from 08:29:54.828171 through 08:29:54.899024: every connect() attempt on tqpair=0x7f68c0000b90 (addr=10.0.0.2, port=4420) fails with errno = 111 and ends with "qpair failed and we were unable to recover it." ...]
00:30:57.788 [2024-11-28 08:29:54.899266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.788 [2024-11-28 08:29:54.899296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.788 qpair failed and we were unable to recover it. 00:30:57.788 [2024-11-28 08:29:54.899664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.788 [2024-11-28 08:29:54.899692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.788 qpair failed and we were unable to recover it. 00:30:57.788 [2024-11-28 08:29:54.900077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.788 [2024-11-28 08:29:54.900105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.788 qpair failed and we were unable to recover it. 00:30:57.788 [2024-11-28 08:29:54.900514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.788 [2024-11-28 08:29:54.900546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.788 qpair failed and we were unable to recover it. 00:30:57.788 [2024-11-28 08:29:54.900911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.788 [2024-11-28 08:29:54.900939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.788 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.901176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.901207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.901561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.901589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.901971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.902000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.902361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.902392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.902734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.902763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 
00:30:57.789 [2024-11-28 08:29:54.903115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.903144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.903364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.903394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.903782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.903817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.904157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.904196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.904559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.904588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.904933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.904962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.905055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.905084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.905329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.905359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.905780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.905809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.906002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.906030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 
00:30:57.789 [2024-11-28 08:29:54.906369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.906401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.906618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.906647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.907030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.907059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.907430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.907460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.907690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.907719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.908077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.908106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.908344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.908373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.908702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.908731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.909071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.909100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.909320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.909349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 
00:30:57.789 [2024-11-28 08:29:54.909701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.909730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.910194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.910225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.910555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.910584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.910824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.910851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.911197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.911227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.911517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.911545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.911887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.911915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.912283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.912313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.912688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.912717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.913088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.913117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 
00:30:57.789 [2024-11-28 08:29:54.913382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.913411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.913762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.789 [2024-11-28 08:29:54.913790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.789 qpair failed and we were unable to recover it. 00:30:57.789 [2024-11-28 08:29:54.914168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.914198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.914541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.914570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.914920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.914949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.915288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.915318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.915641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.915670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.916016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.916043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.916234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.916262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.916641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.916669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 
00:30:57.790 [2024-11-28 08:29:54.917021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.917049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.917401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.917431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.917761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.917796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.918014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.918043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.918268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.918298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.918499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.918528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.918834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.918863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.919072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.919100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.919332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.919362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.919692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.919720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 
00:30:57.790 [2024-11-28 08:29:54.920020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.920049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.920207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.920236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.920667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.920695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.920899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.920927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.921279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.921308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.921629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.921657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.922015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.922045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.922394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.922423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.922660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.922689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.923021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.923050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 
00:30:57.790 [2024-11-28 08:29:54.923291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.923321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.923659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.923688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.924042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.924069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.924407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.924438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.924524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.924552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.790 qpair failed and we were unable to recover it. 00:30:57.790 [2024-11-28 08:29:54.924896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.790 [2024-11-28 08:29:54.924925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 00:30:57.791 [2024-11-28 08:29:54.925226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.791 [2024-11-28 08:29:54.925255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 00:30:57.791 [2024-11-28 08:29:54.925602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.791 [2024-11-28 08:29:54.925631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 00:30:57.791 [2024-11-28 08:29:54.925968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.791 [2024-11-28 08:29:54.925996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 00:30:57.791 [2024-11-28 08:29:54.926214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.791 [2024-11-28 08:29:54.926244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 
00:30:57.791 [2024-11-28 08:29:54.926491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.791 [2024-11-28 08:29:54.926520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 00:30:57.791 [2024-11-28 08:29:54.926880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.791 [2024-11-28 08:29:54.926908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 00:30:57.791 [2024-11-28 08:29:54.927028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.791 [2024-11-28 08:29:54.927056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 00:30:57.791 [2024-11-28 08:29:54.927275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.791 [2024-11-28 08:29:54.927304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 00:30:57.791 [2024-11-28 08:29:54.927656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.791 [2024-11-28 08:29:54.927684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 00:30:57.791 [2024-11-28 08:29:54.928044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.791 [2024-11-28 08:29:54.928073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 00:30:57.791 [2024-11-28 08:29:54.928464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.791 [2024-11-28 08:29:54.928493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 00:30:57.791 [2024-11-28 08:29:54.928838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.791 [2024-11-28 08:29:54.928866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 00:30:57.791 [2024-11-28 08:29:54.929099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.791 [2024-11-28 08:29:54.929128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 00:30:57.791 [2024-11-28 08:29:54.929521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.791 [2024-11-28 08:29:54.929551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 
00:30:57.791 [2024-11-28 08:29:54.929922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.791 [2024-11-28 08:29:54.929950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 00:30:57.791 [2024-11-28 08:29:54.930194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.791 [2024-11-28 08:29:54.930223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 00:30:57.791 [2024-11-28 08:29:54.930567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.791 [2024-11-28 08:29:54.930602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 00:30:57.791 [2024-11-28 08:29:54.931007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.791 [2024-11-28 08:29:54.931035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 00:30:57.791 [2024-11-28 08:29:54.931418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.791 [2024-11-28 08:29:54.931448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 00:30:57.791 [2024-11-28 08:29:54.931653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.791 [2024-11-28 08:29:54.931681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 00:30:57.791 [2024-11-28 08:29:54.932060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.791 [2024-11-28 08:29:54.932089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 00:30:57.791 [2024-11-28 08:29:54.932240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.791 [2024-11-28 08:29:54.932268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 00:30:57.791 [2024-11-28 08:29:54.932492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.791 [2024-11-28 08:29:54.932522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 00:30:57.791 [2024-11-28 08:29:54.932883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.791 [2024-11-28 08:29:54.932912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.791 qpair failed and we were unable to recover it. 
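errno = 111 is ECONNREFUSED on Linux: connect() in posix_sock_create reaches the kernel, but nothing is listening on 10.0.0.2:4420, consistent with the target side being held down by the nvmf_target_disconnect test, so each attempt is refused immediately and nvme_tcp_qpair_connect_sock abandons the qpair. As an illustration only (not part of this run), the same refusal can be reproduced from bash against any address with no listener:

  $ bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'
  bash: connect: Connection refused
  bash: /dev/tcp/10.0.0.2/4420: Connection refused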
00:30:57.791 [2024-11-28 08:29:54.933267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.791 [2024-11-28 08:29:54.933296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420
00:30:57.791 qpair failed and we were unable to recover it.
[... three more identical attempts on tqpair=0x7f68c0000b90 through 08:29:54.934155 ...]
00:30:57.791 [2024-11-28 08:29:54.934703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.791 [2024-11-28 08:29:54.934794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420
00:30:57.791 qpair failed and we were unable to recover it.
[... the same pattern continues on the new tqpair=0x7f68cc000b90 through 08:29:54.937362, differing only in timestamps ...]
00:30:57.791 [2024-11-28 08:29:54.937564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.791 [2024-11-28 08:29:54.937593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420
00:30:57.791 qpair failed and we were unable to recover it.
[... pattern repeats on tqpair=0x7f68cc000b90 through 08:29:54.947759, differing only in timestamps ...]
00:30:57.792 [2024-11-28 08:29:54.947962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.792 [2024-11-28 08:29:54.947991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420
00:30:57.792 qpair failed and we were unable to recover it.
00:30:57.792 [2024-11-28 08:29:54.948259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.792 [2024-11-28 08:29:54.948288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420
00:30:57.792 qpair failed and we were unable to recover it.
00:30:57.792 08:29:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:57.792 [2024-11-28 08:29:54.948521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.792 [2024-11-28 08:29:54.948550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420
00:30:57.792 qpair failed and we were unable to recover it.
00:30:57.792 08:29:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:30:57.792 [2024-11-28 08:29:54.948896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.792 [2024-11-28 08:29:54.948924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420
00:30:57.792 qpair failed and we were unable to recover it.
00:30:57.792 08:29:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:57.792 [2024-11-28 08:29:54.949118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.792 [2024-11-28 08:29:54.949154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420
00:30:57.792 qpair failed and we were unable to recover it.
00:30:57.792 08:29:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:57.792 08:29:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:57.792 [2024-11-28 08:29:54.949542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.792 [2024-11-28 08:29:54.949572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420
00:30:57.792 qpair failed and we were unable to recover it.
00:30:57.792 [2024-11-28 08:29:54.949896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.792 [2024-11-28 08:29:54.949923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420
00:30:57.792 qpair failed and we were unable to recover it.
00:30:57.792 [2024-11-28 08:29:54.950278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.792 [2024-11-28 08:29:54.950308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68cc000b90 with addr=10.0.0.2, port=4420
00:30:57.792 qpair failed and we were unable to recover it.
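The "08:29:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))" lines above are not SPDK messages but bash xtrace from the test scripts: the harness runs under set -x with a PS4 prompt that stamps each traced command with the time, the test name, and the source file@line, which is why shell steps such as timing_exit start_nvmf_tgt and set +x appear interleaved with the application log. A minimal sketch of such a prefix, assuming a TEST_NAME variable for the label (the actual PS4 set in autotest_common.sh differs):

  # illustrative sketch only; TEST_NAME is a stand-in, not the harness's real variable
  export PS4='$(date +%H:%M:%S) ${TEST_NAME:-main} -- ${BASH_SOURCE##*/}@${LINENO} -- # '
  set -x
  (( i == 0 ))   # traced roughly as: 08:29:54 main -- script.sh@12 -- # (( i == 0 ))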
00:30:57.792 [... the connect()/qpair-failure triplet keeps repeating for tqpair=0x7f68cc000b90 from 08:29:54.950691 through 08:29:54.955927 ...]
00:30:57.793 [2024-11-28 08:29:54.956451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.793 [2024-11-28 08:29:54.956542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420
00:30:57.793 qpair failed and we were unable to recover it.
00:30:57.793 [... from here the triplet repeats against a new qpair, tqpair=0x7f68c0000b90 (08:29:54.956805-08:29:54.957221) ...]
00:30:57.793 [... the connect()/qpair-failure triplet repeats for tqpair=0x7f68c0000b90 from 08:29:54.957592 through 08:29:54.985386 ...]
00:30:57.795 [... seven connect()/qpair-failure triplets for tqpair=0x7f68c0000b90 (08:29:54.985745-08:29:54.987450) ...]
00:30:57.795 08:29:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:57.795 [... one connect()/qpair-failure triplet for tqpair=0x7f68c0000b90 ...]
00:30:57.795 08:29:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:57.795 [... one connect()/qpair-failure triplet for tqpair=0x7f68c0000b90 ...]
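The rpc_cmd trace above shows the test script asking the freshly started target to create a 64 MB RAM-backed block device with 512-byte blocks, named Malloc0. A hedged sketch of the same request sent straight to SPDK's JSON-RPC server, assuming the default Unix socket path /var/tmp/spdk.sock and the documented bdev_malloc_create parameters (num_blocks, block_size, name):

import json
import socket

# bdev_malloc_create takes a block count rather than a size in MB, so the
# script's "64 512" becomes 64 MiB / 512 B = 131072 blocks.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_malloc_create",
    "params": {"name": "Malloc0", "block_size": 512, "num_blocks": 64 * 1024 * 1024 // 512},
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/spdk.sock")           # default SPDK RPC socket
    sock.sendall(json.dumps(request).encode())
    print(sock.recv(65536).decode())             # reply carries the bdev name on success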
00:30:57.796 08:29:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:57.796 [... one connect()/qpair-failure triplet for tqpair=0x7f68c0000b90 ...]
00:30:57.796 08:29:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:57.796 [... eight connect()/qpair-failure triplets for tqpair=0x7f68c0000b90 (08:29:54.988641-08:29:54.991070) ...]
00:30:57.796 [2024-11-28 08:29:54.991265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.991295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:54.991492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.991520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:54.991863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.991892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:54.992101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.992129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:54.992473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.992503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:54.992851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.992879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:54.993096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.993124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:54.993512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.993542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:54.993772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.993800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:54.994144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.994184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 
00:30:57.796 [2024-11-28 08:29:54.994500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.994529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:54.994881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.994910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:54.995235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.995266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:54.995585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.995614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:54.995969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.995997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:54.996213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.996243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:54.996464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.996492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:54.996858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.996886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:54.997116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.997146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:54.997409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.997438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 
00:30:57.796 [2024-11-28 08:29:54.997756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.997785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:54.997989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.998018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:54.998332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.998367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:54.998724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.998753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:54.999099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.999128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:54.999537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.999567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:54.999895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:54.999924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:55.000271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-11-28 08:29:55.000303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.796 qpair failed and we were unable to recover it. 00:30:57.796 [2024-11-28 08:29:55.000653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.797 [2024-11-28 08:29:55.000682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.797 qpair failed and we were unable to recover it. 00:30:57.797 [2024-11-28 08:29:55.001030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.797 [2024-11-28 08:29:55.001058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420 00:30:57.797 qpair failed and we were unable to recover it. 
00:30:57.797 [2024-11-28 08:29:55.001379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.797 [2024-11-28 08:29:55.001409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f68c0000b90 with addr=10.0.0.2, port=4420
00:30:57.797 qpair failed and we were unable to recover it.
[... the same three-line posix_sock_create / nvme_tcp_qpair_connect_sock / "qpair failed" sequence repeats for every reconnect attempt from 08:29:55.001770 through 08:29:55.015011 ...]
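errno = 111 is ECONNREFUSED on Linux: the host-side SPDK initiator is dialing 10.0.0.2:4420 before the target has a listener bound there, and it keeps retrying until the nvmf_subsystem_add_listener step further down succeeds (the *** NVMe/TCP Target Listening *** notice). The same refuse-and-retry behavior can be reproduced outside the harness with a plain socket probe; this is a minimal sketch assuming a bash with /dev/tcp support, not part of the SPDK scripts:

    # Probe the NVMe/TCP port until something accepts the connection;
    # while nothing is listening, connect() fails with ECONNREFUSED (111).
    until timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
        echo 'connect() refused, retrying...'
        sleep 0.1
    done
    echo 'listener on 10.0.0.2:4420 is up'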
[... qpair reconnect failures continue, 08:29:55.015392 through .016827 ...]
00:30:57.798 Malloc0
00:30:57.798 qpair failed and we were unable to recover it.
[... qpair reconnect failures continue, 08:29:55.017106 through .017538 ...]
00:30:57.798 08:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- [[ 0 == 0 ]]
[... qpair reconnect failure, 08:29:55.017894 ...]
00:30:57.798 08:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- rpc_cmd nvmf_create_transport -t tcp -o
[... qpair reconnect failure, 08:29:55.018177 ...]
00:30:57.798 08:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- xtrace_disable
00:30:57.798 08:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- set +x
[... qpair reconnect failures continue, 08:29:55.018583 through .021572 ...]
[... qpair reconnect failures continue, 08:29:55.021918 through .024287 ...]
00:30:57.798 [2024-11-28 08:29:55.024363] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... qpair reconnect failure, 08:29:55.024620 ...]
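The *** TCP Transport Init *** notice is the target acknowledging the nvmf_create_transport RPC traced just above; the reconnect storm keeps failing because a transport alone does not open a listening socket. rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, so a rough out-of-harness equivalent, assuming a target started with the default RPC socket /var/tmp/spdk.sock, would be:

    # Create the TCP transport on a running nvmf_tgt
    # (flags copied verbatim from the trace above).
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o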
[... qpair reconnect failures continue, 08:29:55.024871 through .031387 ...]
[... qpair reconnect failures continue, 08:29:55.031712 through .033405 ...]
00:30:57.799 08:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- [[ 0 == 0 ]]
[... qpair reconnect failure, 08:29:55.033764 ...]
00:30:57.799 08:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
[... qpair reconnect failure, 08:29:55.034013 ...]
00:30:57.799 08:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- xtrace_disable
00:30:57.799 08:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- set +x
[... qpair reconnect failures continue, 08:29:55.034357 through .037260 ...]
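target_disconnect.sh@22 creates the subsystem the host will eventually attach to: -a allows any host NQN to connect and -s sets the subsystem serial number. A direct rpc.py equivalent of the traced command, again assuming the default RPC socket, is simply:

    # Create the target subsystem; -a = allow any host, -s = serial number.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001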
[... qpair reconnect failures continue, 08:29:55.037616 through .044760 ...]
[... qpair reconnect failures continue, 08:29:55.045015 through .045429 ...]
00:30:58.065 08:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- [[ 0 == 0 ]]
[... qpair reconnect failure, 08:29:55.045785 ...]
00:30:58.065 08:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:58.065 08:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- xtrace_disable
00:30:58.065 08:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- set +x
[... qpair reconnect failures continue, 08:29:55.046170 through .047513 ...]
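The bare "Malloc0" printed earlier is the stdout of the RPC that created a RAM-backed bdev, and target_disconnect.sh@24 now attaches that bdev to the subsystem as a namespace. The bdev-creation call itself falls outside this excerpt, so the geometry below is an assumption for illustration; a sketch of the pair of RPCs:

    # Create a RAM disk and expose it as a namespace of cnode1.
    # The 64 MiB / 512 B geometry is assumed; this log only shows the
    # command's "Malloc0" output, not its arguments.
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0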
[... qpair reconnect failures continue, 08:29:55.047850 through .054971 ...]
[... qpair reconnect failures continue, 08:29:55.055318 through .057321 ...]
00:30:58.066 08:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- [[ 0 == 0 ]]
[... qpair reconnect failure, 08:29:55.057637 ...]
00:30:58.066 08:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[... qpair reconnect failure, 08:29:55.058005 ...]
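This is the step the whole retry storm has been waiting for: target_disconnect.sh@25 binds the subsystem to 10.0.0.2:4420, which is why the *** NVMe/TCP Target Listening *** notice appears a few lines below, together with a second add_listener for the discovery service. The rpc.py equivalents, with flags copied from the traces:

    # Open the data listener the host has been probing, then the
    # discovery-service listener added by the next trace.
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420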
00:30:58.066 08:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- xtrace_disable
00:30:58.066 08:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- set +x
[... qpair reconnect failures continue, 08:29:55.058284 through .060880 ...]
[... qpair reconnect failures continue, 08:29:55.061152 through .064519 ...]
00:30:58.066 [2024-11-28 08:29:55.064627] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:58.066 08:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:58.066 08:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:58.066 08:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:58.066 08:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:58.066 [2024-11-28 08:29:55.075319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.066 [2024-11-28 08:29:55.075421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.066 [2024-11-28 08:29:55.075465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.066 [2024-11-28 08:29:55.075488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.066 [2024-11-28 08:29:55.075508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.067 [2024-11-28 08:29:55.075560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.067 qpair failed and we were unable to recover it.
00:30:58.067 08:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:58.067 08:29:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2161200
00:30:58.067 [2024-11-28 08:29:55.085273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.067 [2024-11-28 08:29:55.085358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.067 [2024-11-28 08:29:55.085385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.067 [2024-11-28 08:29:55.085399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.067 [2024-11-28 08:29:55.085412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.067 [2024-11-28 08:29:55.085441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.067 qpair failed and we were unable to recover it.
00:30:58.067 [2024-11-28 08:29:55.095175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.067 [2024-11-28 08:29:55.095241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.067 [2024-11-28 08:29:55.095262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.067 [2024-11-28 08:29:55.095274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.067 [2024-11-28 08:29:55.095283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.067 [2024-11-28 08:29:55.095305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.067 qpair failed and we were unable to recover it.
00:30:58.067 [2024-11-28 08:29:55.105156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.067 [2024-11-28 08:29:55.105229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.067 [2024-11-28 08:29:55.105243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.067 [2024-11-28 08:29:55.105250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.067 [2024-11-28 08:29:55.105257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.067 [2024-11-28 08:29:55.105272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.067 qpair failed and we were unable to recover it.
00:30:58.067 [2024-11-28 08:29:55.115121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.067 [2024-11-28 08:29:55.115177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.067 [2024-11-28 08:29:55.115191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.067 [2024-11-28 08:29:55.115199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.067 [2024-11-28 08:29:55.115205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.067 [2024-11-28 08:29:55.115220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.067 qpair failed and we were unable to recover it.
00:30:58.067 [2024-11-28 08:29:55.125217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.067 [2024-11-28 08:29:55.125272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.067 [2024-11-28 08:29:55.125286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.067 [2024-11-28 08:29:55.125293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.067 [2024-11-28 08:29:55.125299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.067 [2024-11-28 08:29:55.125313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.067 qpair failed and we were unable to recover it.
00:30:58.067 [2024-11-28 08:29:55.135221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.067 [2024-11-28 08:29:55.135269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.067 [2024-11-28 08:29:55.135282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.067 [2024-11-28 08:29:55.135289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.067 [2024-11-28 08:29:55.135296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.067 [2024-11-28 08:29:55.135310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.067 qpair failed and we were unable to recover it.
00:30:58.067 [2024-11-28 08:29:55.145276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.067 [2024-11-28 08:29:55.145334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.067 [2024-11-28 08:29:55.145347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.067 [2024-11-28 08:29:55.145354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.067 [2024-11-28 08:29:55.145360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.067 [2024-11-28 08:29:55.145374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.067 qpair failed and we were unable to recover it.
00:30:58.067 [2024-11-28 08:29:55.155335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.067 [2024-11-28 08:29:55.155390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.067 [2024-11-28 08:29:55.155403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.067 [2024-11-28 08:29:55.155410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.067 [2024-11-28 08:29:55.155417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.067 [2024-11-28 08:29:55.155431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.067 qpair failed and we were unable to recover it.
00:30:58.067 [2024-11-28 08:29:55.165225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.067 [2024-11-28 08:29:55.165283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.067 [2024-11-28 08:29:55.165297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.067 [2024-11-28 08:29:55.165307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.067 [2024-11-28 08:29:55.165313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.067 [2024-11-28 08:29:55.165328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.067 qpair failed and we were unable to recover it.
00:30:58.067 [2024-11-28 08:29:55.175324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.067 [2024-11-28 08:29:55.175372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.067 [2024-11-28 08:29:55.175385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.067 [2024-11-28 08:29:55.175392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.067 [2024-11-28 08:29:55.175399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.067 [2024-11-28 08:29:55.175413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.067 qpair failed and we were unable to recover it.
00:30:58.067 [2024-11-28 08:29:55.185397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.067 [2024-11-28 08:29:55.185454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.067 [2024-11-28 08:29:55.185468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.067 [2024-11-28 08:29:55.185475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.067 [2024-11-28 08:29:55.185482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.067 [2024-11-28 08:29:55.185495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.067 qpair failed and we were unable to recover it.
00:30:58.067 [2024-11-28 08:29:55.195400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.067 [2024-11-28 08:29:55.195468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.067 [2024-11-28 08:29:55.195481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.067 [2024-11-28 08:29:55.195488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.067 [2024-11-28 08:29:55.195495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.067 [2024-11-28 08:29:55.195509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.067 qpair failed and we were unable to recover it.
00:30:58.067 [2024-11-28 08:29:55.205446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.067 [2024-11-28 08:29:55.205536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.067 [2024-11-28 08:29:55.205549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.067 [2024-11-28 08:29:55.205556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.068 [2024-11-28 08:29:55.205562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.068 [2024-11-28 08:29:55.205580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.068 qpair failed and we were unable to recover it.
00:30:58.068 [2024-11-28 08:29:55.215432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.068 [2024-11-28 08:29:55.215476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.068 [2024-11-28 08:29:55.215489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.068 [2024-11-28 08:29:55.215496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.068 [2024-11-28 08:29:55.215502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.068 [2024-11-28 08:29:55.215516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.068 qpair failed and we were unable to recover it.
00:30:58.068 [2024-11-28 08:29:55.225528] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.068 [2024-11-28 08:29:55.225584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.068 [2024-11-28 08:29:55.225597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.068 [2024-11-28 08:29:55.225604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.068 [2024-11-28 08:29:55.225611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.068 [2024-11-28 08:29:55.225625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.068 qpair failed and we were unable to recover it.
00:30:58.068 [2024-11-28 08:29:55.235536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.068 [2024-11-28 08:29:55.235591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.068 [2024-11-28 08:29:55.235604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.068 [2024-11-28 08:29:55.235611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.068 [2024-11-28 08:29:55.235617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.068 [2024-11-28 08:29:55.235631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.068 qpair failed and we were unable to recover it.
00:30:58.068 [2024-11-28 08:29:55.245583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.068 [2024-11-28 08:29:55.245685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.068 [2024-11-28 08:29:55.245698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.068 [2024-11-28 08:29:55.245705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.068 [2024-11-28 08:29:55.245711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.068 [2024-11-28 08:29:55.245726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.068 qpair failed and we were unable to recover it.
00:30:58.068 [2024-11-28 08:29:55.255542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.068 [2024-11-28 08:29:55.255595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.068 [2024-11-28 08:29:55.255608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.068 [2024-11-28 08:29:55.255616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.068 [2024-11-28 08:29:55.255622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.068 [2024-11-28 08:29:55.255636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.068 qpair failed and we were unable to recover it.
00:30:58.068 [2024-11-28 08:29:55.265614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.068 [2024-11-28 08:29:55.265671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.068 [2024-11-28 08:29:55.265684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.068 [2024-11-28 08:29:55.265691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.068 [2024-11-28 08:29:55.265697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.068 [2024-11-28 08:29:55.265711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.068 qpair failed and we were unable to recover it.
00:30:58.068 [2024-11-28 08:29:55.275662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.068 [2024-11-28 08:29:55.275765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.068 [2024-11-28 08:29:55.275778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.068 [2024-11-28 08:29:55.275785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.068 [2024-11-28 08:29:55.275791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.068 [2024-11-28 08:29:55.275805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.068 qpair failed and we were unable to recover it.
00:30:58.068 [2024-11-28 08:29:55.285673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.068 [2024-11-28 08:29:55.285726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.068 [2024-11-28 08:29:55.285740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.068 [2024-11-28 08:29:55.285747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.068 [2024-11-28 08:29:55.285753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.068 [2024-11-28 08:29:55.285767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.068 qpair failed and we were unable to recover it.
00:30:58.068 [2024-11-28 08:29:55.295660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.068 [2024-11-28 08:29:55.295706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.068 [2024-11-28 08:29:55.295719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.068 [2024-11-28 08:29:55.295729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.068 [2024-11-28 08:29:55.295735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.068 [2024-11-28 08:29:55.295750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.068 qpair failed and we were unable to recover it.
00:30:58.068 [2024-11-28 08:29:55.305847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.068 [2024-11-28 08:29:55.305934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.068 [2024-11-28 08:29:55.305947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.068 [2024-11-28 08:29:55.305953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.068 [2024-11-28 08:29:55.305960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.068 [2024-11-28 08:29:55.305973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.068 qpair failed and we were unable to recover it.
00:30:58.068 [2024-11-28 08:29:55.315814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.068 [2024-11-28 08:29:55.315868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.068 [2024-11-28 08:29:55.315882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.068 [2024-11-28 08:29:55.315889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.068 [2024-11-28 08:29:55.315895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.068 [2024-11-28 08:29:55.315909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.068 qpair failed and we were unable to recover it.
00:30:58.068 [2024-11-28 08:29:55.325818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.068 [2024-11-28 08:29:55.325865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.068 [2024-11-28 08:29:55.325878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.068 [2024-11-28 08:29:55.325886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.068 [2024-11-28 08:29:55.325893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.068 [2024-11-28 08:29:55.325907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.068 qpair failed and we were unable to recover it.
00:30:58.068 [2024-11-28 08:29:55.335812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.068 [2024-11-28 08:29:55.335872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.068 [2024-11-28 08:29:55.335885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.068 [2024-11-28 08:29:55.335892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.068 [2024-11-28 08:29:55.335898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.069 [2024-11-28 08:29:55.335916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.069 qpair failed and we were unable to recover it.
00:30:58.069 [2024-11-28 08:29:55.345823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.069 [2024-11-28 08:29:55.345895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.069 [2024-11-28 08:29:55.345920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.069 [2024-11-28 08:29:55.345928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.069 [2024-11-28 08:29:55.345935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.069 [2024-11-28 08:29:55.345955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.069 qpair failed and we were unable to recover it.
00:30:58.332 [2024-11-28 08:29:55.355754] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.332 [2024-11-28 08:29:55.355835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.332 [2024-11-28 08:29:55.355850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.332 [2024-11-28 08:29:55.355858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.332 [2024-11-28 08:29:55.355865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.332 [2024-11-28 08:29:55.355880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.332 qpair failed and we were unable to recover it.
00:30:58.332 [2024-11-28 08:29:55.365860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.332 [2024-11-28 08:29:55.365915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.332 [2024-11-28 08:29:55.365929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.332 [2024-11-28 08:29:55.365936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.332 [2024-11-28 08:29:55.365943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.332 [2024-11-28 08:29:55.365957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.332 qpair failed and we were unable to recover it.
00:30:58.332 [2024-11-28 08:29:55.375874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.332 [2024-11-28 08:29:55.375922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.332 [2024-11-28 08:29:55.375936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.332 [2024-11-28 08:29:55.375943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.332 [2024-11-28 08:29:55.375949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.332 [2024-11-28 08:29:55.375964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.332 qpair failed and we were unable to recover it.
00:30:58.332 [2024-11-28 08:29:55.385962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.332 [2024-11-28 08:29:55.386018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.332 [2024-11-28 08:29:55.386032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.332 [2024-11-28 08:29:55.386039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.332 [2024-11-28 08:29:55.386045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.332 [2024-11-28 08:29:55.386059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.332 qpair failed and we were unable to recover it.
00:30:58.332 [2024-11-28 08:29:55.395997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.332 [2024-11-28 08:29:55.396049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.332 [2024-11-28 08:29:55.396063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.332 [2024-11-28 08:29:55.396070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.332 [2024-11-28 08:29:55.396076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.332 [2024-11-28 08:29:55.396090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.332 qpair failed and we were unable to recover it.
00:30:58.332 [2024-11-28 08:29:55.406031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.332 [2024-11-28 08:29:55.406133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.332 [2024-11-28 08:29:55.406146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.332 [2024-11-28 08:29:55.406154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.332 [2024-11-28 08:29:55.406164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.332 [2024-11-28 08:29:55.406179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.332 qpair failed and we were unable to recover it.
00:30:58.332 [2024-11-28 08:29:55.415987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.332 [2024-11-28 08:29:55.416032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.332 [2024-11-28 08:29:55.416045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.332 [2024-11-28 08:29:55.416053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.332 [2024-11-28 08:29:55.416059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.332 [2024-11-28 08:29:55.416073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.332 qpair failed and we were unable to recover it.
00:30:58.332 [2024-11-28 08:29:55.426034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.333 [2024-11-28 08:29:55.426087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.333 [2024-11-28 08:29:55.426104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.333 [2024-11-28 08:29:55.426111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.333 [2024-11-28 08:29:55.426117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.333 [2024-11-28 08:29:55.426131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.333 qpair failed and we were unable to recover it.
00:30:58.333 [2024-11-28 08:29:55.436119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.333 [2024-11-28 08:29:55.436173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.333 [2024-11-28 08:29:55.436188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.333 [2024-11-28 08:29:55.436194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.333 [2024-11-28 08:29:55.436201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.333 [2024-11-28 08:29:55.436215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.333 qpair failed and we were unable to recover it.
00:30:58.333 [2024-11-28 08:29:55.446118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.333 [2024-11-28 08:29:55.446171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.333 [2024-11-28 08:29:55.446184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.333 [2024-11-28 08:29:55.446191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.333 [2024-11-28 08:29:55.446197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.333 [2024-11-28 08:29:55.446211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.333 qpair failed and we were unable to recover it.
00:30:58.333 [2024-11-28 08:29:55.456095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.333 [2024-11-28 08:29:55.456169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.333 [2024-11-28 08:29:55.456183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.333 [2024-11-28 08:29:55.456190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.333 [2024-11-28 08:29:55.456196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.333 [2024-11-28 08:29:55.456210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.333 qpair failed and we were unable to recover it.
00:30:58.333 [2024-11-28 08:29:55.466054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.333 [2024-11-28 08:29:55.466111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.333 [2024-11-28 08:29:55.466125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.333 [2024-11-28 08:29:55.466132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.333 [2024-11-28 08:29:55.466141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.333 [2024-11-28 08:29:55.466156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.333 qpair failed and we were unable to recover it.
00:30:58.333 [2024-11-28 08:29:55.476199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.333 [2024-11-28 08:29:55.476252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.333 [2024-11-28 08:29:55.476266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.333 [2024-11-28 08:29:55.476272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.333 [2024-11-28 08:29:55.476279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.333 [2024-11-28 08:29:55.476293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.333 qpair failed and we were unable to recover it.
00:30:58.333 [2024-11-28 08:29:55.486225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.333 [2024-11-28 08:29:55.486275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.333 [2024-11-28 08:29:55.486289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.333 [2024-11-28 08:29:55.486296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.333 [2024-11-28 08:29:55.486302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.333 [2024-11-28 08:29:55.486316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.333 qpair failed and we were unable to recover it.
00:30:58.333 [2024-11-28 08:29:55.496226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.333 [2024-11-28 08:29:55.496271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.333 [2024-11-28 08:29:55.496284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.333 [2024-11-28 08:29:55.496291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.333 [2024-11-28 08:29:55.496297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.333 [2024-11-28 08:29:55.496311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.333 qpair failed and we were unable to recover it.
00:30:58.333 [2024-11-28 08:29:55.506301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.333 [2024-11-28 08:29:55.506356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.333 [2024-11-28 08:29:55.506370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.333 [2024-11-28 08:29:55.506377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.333 [2024-11-28 08:29:55.506383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.333 [2024-11-28 08:29:55.506397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.333 qpair failed and we were unable to recover it.
00:30:58.333 [2024-11-28 08:29:55.516233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.333 [2024-11-28 08:29:55.516330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.333 [2024-11-28 08:29:55.516344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.333 [2024-11-28 08:29:55.516351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.333 [2024-11-28 08:29:55.516357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.333 [2024-11-28 08:29:55.516371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.333 qpair failed and we were unable to recover it.
00:30:58.333 [2024-11-28 08:29:55.526330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.333 [2024-11-28 08:29:55.526381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.333 [2024-11-28 08:29:55.526394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.333 [2024-11-28 08:29:55.526401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.333 [2024-11-28 08:29:55.526407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.333 [2024-11-28 08:29:55.526421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.333 qpair failed and we were unable to recover it.
00:30:58.333 [2024-11-28 08:29:55.536342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.333 [2024-11-28 08:29:55.536391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.333 [2024-11-28 08:29:55.536404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.333 [2024-11-28 08:29:55.536411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.333 [2024-11-28 08:29:55.536417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.333 [2024-11-28 08:29:55.536431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.333 qpair failed and we were unable to recover it.
00:30:58.333 [2024-11-28 08:29:55.546297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.333 [2024-11-28 08:29:55.546350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.333 [2024-11-28 08:29:55.546364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.333 [2024-11-28 08:29:55.546371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.333 [2024-11-28 08:29:55.546377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.333 [2024-11-28 08:29:55.546391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.333 qpair failed and we were unable to recover it.
00:30:58.333 [2024-11-28 08:29:55.556460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.333 [2024-11-28 08:29:55.556516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.333 [2024-11-28 08:29:55.556533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.333 [2024-11-28 08:29:55.556541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.333 [2024-11-28 08:29:55.556547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.333 [2024-11-28 08:29:55.556561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.333 qpair failed and we were unable to recover it.
00:30:58.333 [2024-11-28 08:29:55.566455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.333 [2024-11-28 08:29:55.566508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.333 [2024-11-28 08:29:55.566521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.333 [2024-11-28 08:29:55.566528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.333 [2024-11-28 08:29:55.566534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.333 [2024-11-28 08:29:55.566548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.333 qpair failed and we were unable to recover it.
00:30:58.333 [2024-11-28 08:29:55.576454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.333 [2024-11-28 08:29:55.576501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.333 [2024-11-28 08:29:55.576514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.333 [2024-11-28 08:29:55.576520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.333 [2024-11-28 08:29:55.576527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.334 [2024-11-28 08:29:55.576541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.334 qpair failed and we were unable to recover it.
00:30:58.334 [2024-11-28 08:29:55.586491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.334 [2024-11-28 08:29:55.586549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.334 [2024-11-28 08:29:55.586563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.334 [2024-11-28 08:29:55.586570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.334 [2024-11-28 08:29:55.586576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.334 [2024-11-28 08:29:55.586590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.334 qpair failed and we were unable to recover it.
00:30:58.334 [2024-11-28 08:29:55.596562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.334 [2024-11-28 08:29:55.596616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.334 [2024-11-28 08:29:55.596629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.334 [2024-11-28 08:29:55.596636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.334 [2024-11-28 08:29:55.596645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.334 [2024-11-28 08:29:55.596659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.334 qpair failed and we were unable to recover it.
00:30:58.334 [2024-11-28 08:29:55.606579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.334 [2024-11-28 08:29:55.606636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.334 [2024-11-28 08:29:55.606649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.334 [2024-11-28 08:29:55.606656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.334 [2024-11-28 08:29:55.606662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.334 [2024-11-28 08:29:55.606676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.334 qpair failed and we were unable to recover it.
00:30:58.334 [2024-11-28 08:29:55.616565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.334 [2024-11-28 08:29:55.616608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.334 [2024-11-28 08:29:55.616622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.334 [2024-11-28 08:29:55.616629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.334 [2024-11-28 08:29:55.616635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.334 [2024-11-28 08:29:55.616648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.334 qpair failed and we were unable to recover it.
00:30:58.598 [2024-11-28 08:29:55.626610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.598 [2024-11-28 08:29:55.626663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.598 [2024-11-28 08:29:55.626675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.598 [2024-11-28 08:29:55.626683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.598 [2024-11-28 08:29:55.626689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.598 [2024-11-28 08:29:55.626703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.598 qpair failed and we were unable to recover it.
00:30:58.598 [2024-11-28 08:29:55.636681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.598 [2024-11-28 08:29:55.636736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.598 [2024-11-28 08:29:55.636749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.598 [2024-11-28 08:29:55.636756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.598 [2024-11-28 08:29:55.636762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.598 [2024-11-28 08:29:55.636776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.598 qpair failed and we were unable to recover it. 00:30:58.598 [2024-11-28 08:29:55.646684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.598 [2024-11-28 08:29:55.646735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.598 [2024-11-28 08:29:55.646749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.598 [2024-11-28 08:29:55.646756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.598 [2024-11-28 08:29:55.646762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.598 [2024-11-28 08:29:55.646777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.598 qpair failed and we were unable to recover it. 00:30:58.598 [2024-11-28 08:29:55.656652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.598 [2024-11-28 08:29:55.656696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.598 [2024-11-28 08:29:55.656709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.598 [2024-11-28 08:29:55.656716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.598 [2024-11-28 08:29:55.656722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.598 [2024-11-28 08:29:55.656736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.598 qpair failed and we were unable to recover it. 
00:30:58.598 [2024-11-28 08:29:55.666748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.598 [2024-11-28 08:29:55.666804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.598 [2024-11-28 08:29:55.666818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.598 [2024-11-28 08:29:55.666825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.598 [2024-11-28 08:29:55.666831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.598 [2024-11-28 08:29:55.666844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.598 qpair failed and we were unable to recover it. 00:30:58.598 [2024-11-28 08:29:55.676796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.598 [2024-11-28 08:29:55.676854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.598 [2024-11-28 08:29:55.676867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.598 [2024-11-28 08:29:55.676874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.598 [2024-11-28 08:29:55.676880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.598 [2024-11-28 08:29:55.676895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.598 qpair failed and we were unable to recover it. 00:30:58.598 [2024-11-28 08:29:55.686812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.598 [2024-11-28 08:29:55.686916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.598 [2024-11-28 08:29:55.686930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.598 [2024-11-28 08:29:55.686937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.598 [2024-11-28 08:29:55.686943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.598 [2024-11-28 08:29:55.686957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.598 qpair failed and we were unable to recover it. 
00:30:58.598 [2024-11-28 08:29:55.696796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.598 [2024-11-28 08:29:55.696851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.598 [2024-11-28 08:29:55.696876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.598 [2024-11-28 08:29:55.696884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.598 [2024-11-28 08:29:55.696891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.598 [2024-11-28 08:29:55.696910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.598 qpair failed and we were unable to recover it. 00:30:58.598 [2024-11-28 08:29:55.706877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.598 [2024-11-28 08:29:55.706940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.598 [2024-11-28 08:29:55.706956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.598 [2024-11-28 08:29:55.706963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.598 [2024-11-28 08:29:55.706974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.599 [2024-11-28 08:29:55.706991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.599 qpair failed and we were unable to recover it. 00:30:58.599 [2024-11-28 08:29:55.716932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.599 [2024-11-28 08:29:55.716990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.599 [2024-11-28 08:29:55.717015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.599 [2024-11-28 08:29:55.717023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.599 [2024-11-28 08:29:55.717030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.599 [2024-11-28 08:29:55.717050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.599 qpair failed and we were unable to recover it. 
00:30:58.599 [2024-11-28 08:29:55.726922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.599 [2024-11-28 08:29:55.726975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.599 [2024-11-28 08:29:55.726991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.599 [2024-11-28 08:29:55.727002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.599 [2024-11-28 08:29:55.727009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.599 [2024-11-28 08:29:55.727026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.599 qpair failed and we were unable to recover it. 00:30:58.599 [2024-11-28 08:29:55.736904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.599 [2024-11-28 08:29:55.736946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.599 [2024-11-28 08:29:55.736960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.599 [2024-11-28 08:29:55.736967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.599 [2024-11-28 08:29:55.736974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.599 [2024-11-28 08:29:55.736988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.599 qpair failed and we were unable to recover it. 00:30:58.599 [2024-11-28 08:29:55.746874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.599 [2024-11-28 08:29:55.746927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.599 [2024-11-28 08:29:55.746942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.599 [2024-11-28 08:29:55.746949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.599 [2024-11-28 08:29:55.746956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.599 [2024-11-28 08:29:55.746970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.599 qpair failed and we were unable to recover it. 
00:30:58.599 [2024-11-28 08:29:55.756915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.599 [2024-11-28 08:29:55.756973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.599 [2024-11-28 08:29:55.756999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.599 [2024-11-28 08:29:55.757008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.599 [2024-11-28 08:29:55.757015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.599 [2024-11-28 08:29:55.757034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.599 qpair failed and we were unable to recover it. 00:30:58.599 [2024-11-28 08:29:55.767077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.599 [2024-11-28 08:29:55.767154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.599 [2024-11-28 08:29:55.767173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.599 [2024-11-28 08:29:55.767180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.599 [2024-11-28 08:29:55.767187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.599 [2024-11-28 08:29:55.767207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.599 qpair failed and we were unable to recover it. 00:30:58.599 [2024-11-28 08:29:55.777026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.599 [2024-11-28 08:29:55.777071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.599 [2024-11-28 08:29:55.777085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.599 [2024-11-28 08:29:55.777092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.599 [2024-11-28 08:29:55.777098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.599 [2024-11-28 08:29:55.777112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.599 qpair failed and we were unable to recover it. 
00:30:58.599 [2024-11-28 08:29:55.787078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.599 [2024-11-28 08:29:55.787137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.599 [2024-11-28 08:29:55.787151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.599 [2024-11-28 08:29:55.787162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.599 [2024-11-28 08:29:55.787169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.599 [2024-11-28 08:29:55.787184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.599 qpair failed and we were unable to recover it. 00:30:58.599 [2024-11-28 08:29:55.797117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.599 [2024-11-28 08:29:55.797174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.599 [2024-11-28 08:29:55.797187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.599 [2024-11-28 08:29:55.797194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.599 [2024-11-28 08:29:55.797200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.599 [2024-11-28 08:29:55.797214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.599 qpair failed and we were unable to recover it. 00:30:58.599 [2024-11-28 08:29:55.807149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.599 [2024-11-28 08:29:55.807205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.599 [2024-11-28 08:29:55.807219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.599 [2024-11-28 08:29:55.807226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.599 [2024-11-28 08:29:55.807232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.600 [2024-11-28 08:29:55.807247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.600 qpair failed and we were unable to recover it. 
00:30:58.600 [2024-11-28 08:29:55.817135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.600 [2024-11-28 08:29:55.817192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.600 [2024-11-28 08:29:55.817206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.600 [2024-11-28 08:29:55.817213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.600 [2024-11-28 08:29:55.817219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.600 [2024-11-28 08:29:55.817233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.600 qpair failed and we were unable to recover it. 00:30:58.600 [2024-11-28 08:29:55.827240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.600 [2024-11-28 08:29:55.827315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.600 [2024-11-28 08:29:55.827328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.600 [2024-11-28 08:29:55.827335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.600 [2024-11-28 08:29:55.827341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.600 [2024-11-28 08:29:55.827356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.600 qpair failed and we were unable to recover it. 00:30:58.600 [2024-11-28 08:29:55.837244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.600 [2024-11-28 08:29:55.837298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.600 [2024-11-28 08:29:55.837313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.600 [2024-11-28 08:29:55.837320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.600 [2024-11-28 08:29:55.837326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.600 [2024-11-28 08:29:55.837344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.600 qpair failed and we were unable to recover it. 
00:30:58.600 [2024-11-28 08:29:55.847251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.600 [2024-11-28 08:29:55.847305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.600 [2024-11-28 08:29:55.847319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.600 [2024-11-28 08:29:55.847326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.600 [2024-11-28 08:29:55.847332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.600 [2024-11-28 08:29:55.847346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.600 qpair failed and we were unable to recover it. 00:30:58.600 [2024-11-28 08:29:55.857238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.600 [2024-11-28 08:29:55.857311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.600 [2024-11-28 08:29:55.857328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.600 [2024-11-28 08:29:55.857335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.600 [2024-11-28 08:29:55.857341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.600 [2024-11-28 08:29:55.857356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.600 qpair failed and we were unable to recover it. 00:30:58.600 [2024-11-28 08:29:55.867346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.600 [2024-11-28 08:29:55.867423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.600 [2024-11-28 08:29:55.867436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.600 [2024-11-28 08:29:55.867443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.600 [2024-11-28 08:29:55.867449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.600 [2024-11-28 08:29:55.867463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.600 qpair failed and we were unable to recover it. 
00:30:58.600 [2024-11-28 08:29:55.877232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.600 [2024-11-28 08:29:55.877284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.600 [2024-11-28 08:29:55.877299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.600 [2024-11-28 08:29:55.877306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.600 [2024-11-28 08:29:55.877312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.600 [2024-11-28 08:29:55.877327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.600 qpair failed and we were unable to recover it. 00:30:58.869 [2024-11-28 08:29:55.887368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.869 [2024-11-28 08:29:55.887422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.869 [2024-11-28 08:29:55.887435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.869 [2024-11-28 08:29:55.887442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.869 [2024-11-28 08:29:55.887449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.869 [2024-11-28 08:29:55.887464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.869 qpair failed and we were unable to recover it. 00:30:58.869 [2024-11-28 08:29:55.897342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.870 [2024-11-28 08:29:55.897390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.870 [2024-11-28 08:29:55.897403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.870 [2024-11-28 08:29:55.897410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.870 [2024-11-28 08:29:55.897417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.870 [2024-11-28 08:29:55.897434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.870 qpair failed and we were unable to recover it. 
00:30:58.870 [2024-11-28 08:29:55.907407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.870 [2024-11-28 08:29:55.907462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.870 [2024-11-28 08:29:55.907476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.870 [2024-11-28 08:29:55.907483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.870 [2024-11-28 08:29:55.907490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.870 [2024-11-28 08:29:55.907504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.870 qpair failed and we were unable to recover it. 00:30:58.870 [2024-11-28 08:29:55.917441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.870 [2024-11-28 08:29:55.917495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.870 [2024-11-28 08:29:55.917508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.870 [2024-11-28 08:29:55.917515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.870 [2024-11-28 08:29:55.917522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.870 [2024-11-28 08:29:55.917536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.870 qpair failed and we were unable to recover it. 00:30:58.870 [2024-11-28 08:29:55.927518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.870 [2024-11-28 08:29:55.927564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.870 [2024-11-28 08:29:55.927577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.870 [2024-11-28 08:29:55.927584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.870 [2024-11-28 08:29:55.927591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.870 [2024-11-28 08:29:55.927604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.870 qpair failed and we were unable to recover it. 
00:30:58.870 [2024-11-28 08:29:55.937497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.870 [2024-11-28 08:29:55.937575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.870 [2024-11-28 08:29:55.937588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.870 [2024-11-28 08:29:55.937594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.870 [2024-11-28 08:29:55.937601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.870 [2024-11-28 08:29:55.937615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.870 qpair failed and we were unable to recover it. 00:30:58.870 [2024-11-28 08:29:55.947550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.870 [2024-11-28 08:29:55.947608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.870 [2024-11-28 08:29:55.947621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.870 [2024-11-28 08:29:55.947628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.870 [2024-11-28 08:29:55.947634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.870 [2024-11-28 08:29:55.947648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.870 qpair failed and we were unable to recover it. 00:30:58.870 [2024-11-28 08:29:55.957584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.870 [2024-11-28 08:29:55.957634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.870 [2024-11-28 08:29:55.957648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.870 [2024-11-28 08:29:55.957654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.870 [2024-11-28 08:29:55.957661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.870 [2024-11-28 08:29:55.957674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.870 qpair failed and we were unable to recover it. 
00:30:58.870 [2024-11-28 08:29:55.967562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.870 [2024-11-28 08:29:55.967617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.870 [2024-11-28 08:29:55.967630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.870 [2024-11-28 08:29:55.967637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.870 [2024-11-28 08:29:55.967643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.870 [2024-11-28 08:29:55.967657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.870 qpair failed and we were unable to recover it. 00:30:58.870 [2024-11-28 08:29:55.977567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.870 [2024-11-28 08:29:55.977626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.870 [2024-11-28 08:29:55.977640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.870 [2024-11-28 08:29:55.977646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.870 [2024-11-28 08:29:55.977653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.870 [2024-11-28 08:29:55.977667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.870 qpair failed and we were unable to recover it. 00:30:58.870 [2024-11-28 08:29:55.987612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.870 [2024-11-28 08:29:55.987670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.870 [2024-11-28 08:29:55.987686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.870 [2024-11-28 08:29:55.987693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.870 [2024-11-28 08:29:55.987700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.870 [2024-11-28 08:29:55.987714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.870 qpair failed and we were unable to recover it. 
00:30:58.870 [2024-11-28 08:29:55.997682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.870 [2024-11-28 08:29:55.997741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.870 [2024-11-28 08:29:55.997754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.870 [2024-11-28 08:29:55.997761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.870 [2024-11-28 08:29:55.997767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.870 [2024-11-28 08:29:55.997781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.870 qpair failed and we were unable to recover it. 00:30:58.870 [2024-11-28 08:29:56.007696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.870 [2024-11-28 08:29:56.007748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.870 [2024-11-28 08:29:56.007761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.870 [2024-11-28 08:29:56.007768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.870 [2024-11-28 08:29:56.007774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.870 [2024-11-28 08:29:56.007789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.870 qpair failed and we were unable to recover it. 00:30:58.870 [2024-11-28 08:29:56.017677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.870 [2024-11-28 08:29:56.017726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.870 [2024-11-28 08:29:56.017739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.870 [2024-11-28 08:29:56.017746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.870 [2024-11-28 08:29:56.017753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.871 [2024-11-28 08:29:56.017767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.871 qpair failed and we were unable to recover it. 
00:30:58.871 [2024-11-28 08:29:56.027777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.871 [2024-11-28 08:29:56.027836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.871 [2024-11-28 08:29:56.027849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.871 [2024-11-28 08:29:56.027856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.871 [2024-11-28 08:29:56.027866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.871 [2024-11-28 08:29:56.027880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.871 qpair failed and we were unable to recover it. 00:30:58.871 [2024-11-28 08:29:56.037776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.871 [2024-11-28 08:29:56.037832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.871 [2024-11-28 08:29:56.037845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.871 [2024-11-28 08:29:56.037852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.871 [2024-11-28 08:29:56.037858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.871 [2024-11-28 08:29:56.037872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.871 qpair failed and we were unable to recover it. 00:30:58.871 [2024-11-28 08:29:56.047785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.871 [2024-11-28 08:29:56.047857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.871 [2024-11-28 08:29:56.047871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.871 [2024-11-28 08:29:56.047877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.871 [2024-11-28 08:29:56.047884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.871 [2024-11-28 08:29:56.047897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.871 qpair failed and we were unable to recover it. 
00:30:58.871 [2024-11-28 08:29:56.057786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.871 [2024-11-28 08:29:56.057836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.871 [2024-11-28 08:29:56.057861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.871 [2024-11-28 08:29:56.057870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.871 [2024-11-28 08:29:56.057877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.871 [2024-11-28 08:29:56.057896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.871 qpair failed and we were unable to recover it. 00:30:58.871 [2024-11-28 08:29:56.067872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.871 [2024-11-28 08:29:56.067929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.871 [2024-11-28 08:29:56.067954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.871 [2024-11-28 08:29:56.067962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.871 [2024-11-28 08:29:56.067969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.871 [2024-11-28 08:29:56.067989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.871 qpair failed and we were unable to recover it. 00:30:58.871 [2024-11-28 08:29:56.077782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.871 [2024-11-28 08:29:56.077837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.871 [2024-11-28 08:29:56.077852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.871 [2024-11-28 08:29:56.077859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.871 [2024-11-28 08:29:56.077865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.871 [2024-11-28 08:29:56.077881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.871 qpair failed and we were unable to recover it. 
00:30:58.871 [2024-11-28 08:29:56.087926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.871 [2024-11-28 08:29:56.087981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.871 [2024-11-28 08:29:56.087996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.871 [2024-11-28 08:29:56.088002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.871 [2024-11-28 08:29:56.088009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.871 [2024-11-28 08:29:56.088023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.871 qpair failed and we were unable to recover it. 00:30:58.871 [2024-11-28 08:29:56.097871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.871 [2024-11-28 08:29:56.097916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.871 [2024-11-28 08:29:56.097930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.871 [2024-11-28 08:29:56.097937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.871 [2024-11-28 08:29:56.097943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.871 [2024-11-28 08:29:56.097957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.871 qpair failed and we were unable to recover it. 00:30:58.871 [2024-11-28 08:29:56.107986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.871 [2024-11-28 08:29:56.108039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.871 [2024-11-28 08:29:56.108053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.871 [2024-11-28 08:29:56.108060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.871 [2024-11-28 08:29:56.108066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.871 [2024-11-28 08:29:56.108080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.871 qpair failed and we were unable to recover it. 
00:30:58.871 [2024-11-28 08:29:56.118009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.871 [2024-11-28 08:29:56.118063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.871 [2024-11-28 08:29:56.118080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.871 [2024-11-28 08:29:56.118087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.871 [2024-11-28 08:29:56.118094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.871 [2024-11-28 08:29:56.118108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.871 qpair failed and we were unable to recover it. 00:30:58.871 [2024-11-28 08:29:56.128031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.871 [2024-11-28 08:29:56.128083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.871 [2024-11-28 08:29:56.128096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.871 [2024-11-28 08:29:56.128103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.871 [2024-11-28 08:29:56.128110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.871 [2024-11-28 08:29:56.128124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.871 qpair failed and we were unable to recover it. 00:30:58.871 [2024-11-28 08:29:56.138042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.871 [2024-11-28 08:29:56.138088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.871 [2024-11-28 08:29:56.138102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.871 [2024-11-28 08:29:56.138109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.871 [2024-11-28 08:29:56.138115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:58.871 [2024-11-28 08:29:56.138129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:58.871 qpair failed and we were unable to recover it. 
00:30:58.871 [2024-11-28 08:29:56.148105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.871 [2024-11-28 08:29:56.148183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.871 [2024-11-28 08:29:56.148196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.871 [2024-11-28 08:29:56.148203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.871 [2024-11-28 08:29:56.148211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:58.872 [2024-11-28 08:29:56.148225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:58.872 qpair failed and we were unable to recover it.
00:30:59.136 [2024-11-28 08:29:56.158121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.136 [2024-11-28 08:29:56.158184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.136 [2024-11-28 08:29:56.158205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.136 [2024-11-28 08:29:56.158216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.136 [2024-11-28 08:29:56.158223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.136 [2024-11-28 08:29:56.158242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.136 qpair failed and we were unable to recover it.
00:30:59.136 [2024-11-28 08:29:56.168174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.136 [2024-11-28 08:29:56.168224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.136 [2024-11-28 08:29:56.168237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.136 [2024-11-28 08:29:56.168244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.136 [2024-11-28 08:29:56.168250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.136 [2024-11-28 08:29:56.168265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.136 qpair failed and we were unable to recover it.
00:30:59.136 [2024-11-28 08:29:56.178143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.136 [2024-11-28 08:29:56.178193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.136 [2024-11-28 08:29:56.178207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.136 [2024-11-28 08:29:56.178214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.136 [2024-11-28 08:29:56.178220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.136 [2024-11-28 08:29:56.178234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.136 qpair failed and we were unable to recover it.
00:30:59.136 [2024-11-28 08:29:56.188207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.136 [2024-11-28 08:29:56.188265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.136 [2024-11-28 08:29:56.188280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.136 [2024-11-28 08:29:56.188287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.136 [2024-11-28 08:29:56.188295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.136 [2024-11-28 08:29:56.188314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.136 qpair failed and we were unable to recover it.
00:30:59.136 [2024-11-28 08:29:56.198105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.136 [2024-11-28 08:29:56.198177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.136 [2024-11-28 08:29:56.198191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.136 [2024-11-28 08:29:56.198198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.136 [2024-11-28 08:29:56.198204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.136 [2024-11-28 08:29:56.198220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.136 qpair failed and we were unable to recover it.
00:30:59.136 [2024-11-28 08:29:56.208206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.136 [2024-11-28 08:29:56.208259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.136 [2024-11-28 08:29:56.208272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.136 [2024-11-28 08:29:56.208279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.136 [2024-11-28 08:29:56.208286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.136 [2024-11-28 08:29:56.208301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.136 qpair failed and we were unable to recover it.
00:30:59.136 [2024-11-28 08:29:56.218236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.136 [2024-11-28 08:29:56.218282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.136 [2024-11-28 08:29:56.218295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.136 [2024-11-28 08:29:56.218302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.136 [2024-11-28 08:29:56.218308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.136 [2024-11-28 08:29:56.218322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.136 qpair failed and we were unable to recover it.
00:30:59.136 [2024-11-28 08:29:56.228284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.136 [2024-11-28 08:29:56.228341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.136 [2024-11-28 08:29:56.228354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.136 [2024-11-28 08:29:56.228361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.136 [2024-11-28 08:29:56.228367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.136 [2024-11-28 08:29:56.228381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.136 qpair failed and we were unable to recover it.
00:30:59.136 [2024-11-28 08:29:56.238343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.136 [2024-11-28 08:29:56.238396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.136 [2024-11-28 08:29:56.238409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.136 [2024-11-28 08:29:56.238416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.136 [2024-11-28 08:29:56.238423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.136 [2024-11-28 08:29:56.238436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.136 qpair failed and we were unable to recover it.
00:30:59.136 [2024-11-28 08:29:56.248334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.136 [2024-11-28 08:29:56.248408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.136 [2024-11-28 08:29:56.248423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.136 [2024-11-28 08:29:56.248430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.136 [2024-11-28 08:29:56.248436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.136 [2024-11-28 08:29:56.248450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.136 qpair failed and we were unable to recover it.
00:30:59.136 [2024-11-28 08:29:56.258344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.136 [2024-11-28 08:29:56.258391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.136 [2024-11-28 08:29:56.258404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.136 [2024-11-28 08:29:56.258411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.136 [2024-11-28 08:29:56.258417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.136 [2024-11-28 08:29:56.258431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.137 qpair failed and we were unable to recover it.
00:30:59.137 [2024-11-28 08:29:56.268405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.137 [2024-11-28 08:29:56.268461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.137 [2024-11-28 08:29:56.268474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.137 [2024-11-28 08:29:56.268481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.137 [2024-11-28 08:29:56.268488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.137 [2024-11-28 08:29:56.268502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.137 qpair failed and we were unable to recover it.
00:30:59.137 [2024-11-28 08:29:56.278330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.137 [2024-11-28 08:29:56.278389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.137 [2024-11-28 08:29:56.278402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.137 [2024-11-28 08:29:56.278409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.137 [2024-11-28 08:29:56.278416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.137 [2024-11-28 08:29:56.278429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.137 qpair failed and we were unable to recover it.
00:30:59.137 [2024-11-28 08:29:56.288480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.137 [2024-11-28 08:29:56.288562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.137 [2024-11-28 08:29:56.288575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.137 [2024-11-28 08:29:56.288586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.137 [2024-11-28 08:29:56.288592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.137 [2024-11-28 08:29:56.288606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.137 qpair failed and we were unable to recover it.
00:30:59.137 [2024-11-28 08:29:56.298443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.137 [2024-11-28 08:29:56.298491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.137 [2024-11-28 08:29:56.298504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.137 [2024-11-28 08:29:56.298511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.137 [2024-11-28 08:29:56.298517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.137 [2024-11-28 08:29:56.298531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.137 qpair failed and we were unable to recover it.
00:30:59.137 [2024-11-28 08:29:56.308544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.137 [2024-11-28 08:29:56.308600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.137 [2024-11-28 08:29:56.308613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.137 [2024-11-28 08:29:56.308620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.137 [2024-11-28 08:29:56.308626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.137 [2024-11-28 08:29:56.308640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.137 qpair failed and we were unable to recover it.
00:30:59.137 [2024-11-28 08:29:56.318536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.137 [2024-11-28 08:29:56.318590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.137 [2024-11-28 08:29:56.318603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.137 [2024-11-28 08:29:56.318610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.137 [2024-11-28 08:29:56.318617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.137 [2024-11-28 08:29:56.318630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.137 qpair failed and we were unable to recover it.
00:30:59.137 [2024-11-28 08:29:56.328582] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.137 [2024-11-28 08:29:56.328665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.137 [2024-11-28 08:29:56.328678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.137 [2024-11-28 08:29:56.328685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.137 [2024-11-28 08:29:56.328691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.137 [2024-11-28 08:29:56.328709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.137 qpair failed and we were unable to recover it.
00:30:59.137 [2024-11-28 08:29:56.338551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.137 [2024-11-28 08:29:56.338597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.137 [2024-11-28 08:29:56.338610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.137 [2024-11-28 08:29:56.338617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.137 [2024-11-28 08:29:56.338623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.137 [2024-11-28 08:29:56.338637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.137 qpair failed and we were unable to recover it.
00:30:59.137 [2024-11-28 08:29:56.348626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.137 [2024-11-28 08:29:56.348688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.137 [2024-11-28 08:29:56.348700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.137 [2024-11-28 08:29:56.348707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.137 [2024-11-28 08:29:56.348714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.137 [2024-11-28 08:29:56.348728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.137 qpair failed and we were unable to recover it.
00:30:59.137 [2024-11-28 08:29:56.358660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.137 [2024-11-28 08:29:56.358727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.137 [2024-11-28 08:29:56.358740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.137 [2024-11-28 08:29:56.358747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.137 [2024-11-28 08:29:56.358753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.137 [2024-11-28 08:29:56.358767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.137 qpair failed and we were unable to recover it.
00:30:59.137 [2024-11-28 08:29:56.368693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.137 [2024-11-28 08:29:56.368743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.137 [2024-11-28 08:29:56.368756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.137 [2024-11-28 08:29:56.368763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.137 [2024-11-28 08:29:56.368769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.137 [2024-11-28 08:29:56.368783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.137 qpair failed and we were unable to recover it.
00:30:59.137 [2024-11-28 08:29:56.378646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.137 [2024-11-28 08:29:56.378695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.137 [2024-11-28 08:29:56.378708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.137 [2024-11-28 08:29:56.378715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.137 [2024-11-28 08:29:56.378722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.137 [2024-11-28 08:29:56.378736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.137 qpair failed and we were unable to recover it.
00:30:59.137 [2024-11-28 08:29:56.388771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.137 [2024-11-28 08:29:56.388828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.137 [2024-11-28 08:29:56.388841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.137 [2024-11-28 08:29:56.388848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.138 [2024-11-28 08:29:56.388854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.138 [2024-11-28 08:29:56.388868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.138 qpair failed and we were unable to recover it.
00:30:59.138 [2024-11-28 08:29:56.398789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.138 [2024-11-28 08:29:56.398841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.138 [2024-11-28 08:29:56.398855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.138 [2024-11-28 08:29:56.398862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.138 [2024-11-28 08:29:56.398868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.138 [2024-11-28 08:29:56.398882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.138 qpair failed and we were unable to recover it.
00:30:59.138 [2024-11-28 08:29:56.408695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.138 [2024-11-28 08:29:56.408744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.138 [2024-11-28 08:29:56.408757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.138 [2024-11-28 08:29:56.408764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.138 [2024-11-28 08:29:56.408770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.138 [2024-11-28 08:29:56.408784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.138 qpair failed and we were unable to recover it.
00:30:59.138 [2024-11-28 08:29:56.418658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.138 [2024-11-28 08:29:56.418710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.138 [2024-11-28 08:29:56.418727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.138 [2024-11-28 08:29:56.418735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.138 [2024-11-28 08:29:56.418741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.138 [2024-11-28 08:29:56.418756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.138 qpair failed and we were unable to recover it.
00:30:59.401 [2024-11-28 08:29:56.428814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.402 [2024-11-28 08:29:56.428872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.402 [2024-11-28 08:29:56.428886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.402 [2024-11-28 08:29:56.428893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.402 [2024-11-28 08:29:56.428899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.402 [2024-11-28 08:29:56.428914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.402 qpair failed and we were unable to recover it.
00:30:59.402 [2024-11-28 08:29:56.438881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.402 [2024-11-28 08:29:56.438932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.402 [2024-11-28 08:29:56.438945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.402 [2024-11-28 08:29:56.438953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.402 [2024-11-28 08:29:56.438959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.402 [2024-11-28 08:29:56.438973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.402 qpair failed and we were unable to recover it.
00:30:59.402 [2024-11-28 08:29:56.448891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.402 [2024-11-28 08:29:56.448937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.402 [2024-11-28 08:29:56.448950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.402 [2024-11-28 08:29:56.448957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.402 [2024-11-28 08:29:56.448963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.402 [2024-11-28 08:29:56.448977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.402 qpair failed and we were unable to recover it.
00:30:59.402 [2024-11-28 08:29:56.458890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.402 [2024-11-28 08:29:56.458934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.402 [2024-11-28 08:29:56.458947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.402 [2024-11-28 08:29:56.458954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.402 [2024-11-28 08:29:56.458960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.402 [2024-11-28 08:29:56.458981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.402 qpair failed and we were unable to recover it.
00:30:59.402 [2024-11-28 08:29:56.468959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.402 [2024-11-28 08:29:56.469017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.402 [2024-11-28 08:29:56.469042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.402 [2024-11-28 08:29:56.469051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.402 [2024-11-28 08:29:56.469058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.402 [2024-11-28 08:29:56.469077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.402 qpair failed and we were unable to recover it.
00:30:59.402 [2024-11-28 08:29:56.478959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.402 [2024-11-28 08:29:56.479012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.402 [2024-11-28 08:29:56.479027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.402 [2024-11-28 08:29:56.479034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.402 [2024-11-28 08:29:56.479041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.402 [2024-11-28 08:29:56.479056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.402 qpair failed and we were unable to recover it.
00:30:59.402 [2024-11-28 08:29:56.489016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.402 [2024-11-28 08:29:56.489072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.402 [2024-11-28 08:29:56.489087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.402 [2024-11-28 08:29:56.489094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.402 [2024-11-28 08:29:56.489100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.402 [2024-11-28 08:29:56.489114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.402 qpair failed and we were unable to recover it.
00:30:59.402 [2024-11-28 08:29:56.498999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.402 [2024-11-28 08:29:56.499050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.402 [2024-11-28 08:29:56.499063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.402 [2024-11-28 08:29:56.499070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.402 [2024-11-28 08:29:56.499077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.402 [2024-11-28 08:29:56.499091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.402 qpair failed and we were unable to recover it.
00:30:59.402 [2024-11-28 08:29:56.508960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.402 [2024-11-28 08:29:56.509014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.402 [2024-11-28 08:29:56.509028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.402 [2024-11-28 08:29:56.509035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.402 [2024-11-28 08:29:56.509041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.402 [2024-11-28 08:29:56.509055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.402 qpair failed and we were unable to recover it.
00:30:59.402 [2024-11-28 08:29:56.519093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.402 [2024-11-28 08:29:56.519145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.402 [2024-11-28 08:29:56.519162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.402 [2024-11-28 08:29:56.519170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.402 [2024-11-28 08:29:56.519176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.402 [2024-11-28 08:29:56.519191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.402 qpair failed and we were unable to recover it.
00:30:59.402 [2024-11-28 08:29:56.529121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.402 [2024-11-28 08:29:56.529174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.402 [2024-11-28 08:29:56.529188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.402 [2024-11-28 08:29:56.529195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.402 [2024-11-28 08:29:56.529201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.402 [2024-11-28 08:29:56.529216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.402 qpair failed and we were unable to recover it.
00:30:59.402 [2024-11-28 08:29:56.539164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.402 [2024-11-28 08:29:56.539209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.402 [2024-11-28 08:29:56.539222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.402 [2024-11-28 08:29:56.539243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.402 [2024-11-28 08:29:56.539249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.402 [2024-11-28 08:29:56.539264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.402 qpair failed and we were unable to recover it.
00:30:59.402 [2024-11-28 08:29:56.549186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.402 [2024-11-28 08:29:56.549265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.402 [2024-11-28 08:29:56.549282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.402 [2024-11-28 08:29:56.549290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.402 [2024-11-28 08:29:56.549296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.402 [2024-11-28 08:29:56.549310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.402 qpair failed and we were unable to recover it.
00:30:59.403 [2024-11-28 08:29:56.559220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.403 [2024-11-28 08:29:56.559275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.403 [2024-11-28 08:29:56.559288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.403 [2024-11-28 08:29:56.559295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.403 [2024-11-28 08:29:56.559301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.403 [2024-11-28 08:29:56.559316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.403 qpair failed and we were unable to recover it.
00:30:59.403 [2024-11-28 08:29:56.569231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.403 [2024-11-28 08:29:56.569311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.403 [2024-11-28 08:29:56.569324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.403 [2024-11-28 08:29:56.569330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.403 [2024-11-28 08:29:56.569337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.403 [2024-11-28 08:29:56.569351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.403 qpair failed and we were unable to recover it.
00:30:59.403 [2024-11-28 08:29:56.579196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.403 [2024-11-28 08:29:56.579291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.403 [2024-11-28 08:29:56.579304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.403 [2024-11-28 08:29:56.579311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.403 [2024-11-28 08:29:56.579318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.403 [2024-11-28 08:29:56.579332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.403 qpair failed and we were unable to recover it.
00:30:59.403 [2024-11-28 08:29:56.589303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.403 [2024-11-28 08:29:56.589354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.403 [2024-11-28 08:29:56.589368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.403 [2024-11-28 08:29:56.589375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.403 [2024-11-28 08:29:56.589384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.403 [2024-11-28 08:29:56.589400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.403 qpair failed and we were unable to recover it.
00:30:59.403 [2024-11-28 08:29:56.599349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.403 [2024-11-28 08:29:56.599427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.403 [2024-11-28 08:29:56.599440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.403 [2024-11-28 08:29:56.599447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.403 [2024-11-28 08:29:56.599453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.403 [2024-11-28 08:29:56.599467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.403 qpair failed and we were unable to recover it.
00:30:59.403 [2024-11-28 08:29:56.609227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.403 [2024-11-28 08:29:56.609295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.403 [2024-11-28 08:29:56.609308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.403 [2024-11-28 08:29:56.609315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.403 [2024-11-28 08:29:56.609321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.403 [2024-11-28 08:29:56.609335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.403 qpair failed and we were unable to recover it.
00:30:59.403 [2024-11-28 08:29:56.619335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.403 [2024-11-28 08:29:56.619388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.403 [2024-11-28 08:29:56.619401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.403 [2024-11-28 08:29:56.619408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.403 [2024-11-28 08:29:56.619414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.403 [2024-11-28 08:29:56.619428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.403 qpair failed and we were unable to recover it.
00:30:59.403 [2024-11-28 08:29:56.629419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.403 [2024-11-28 08:29:56.629474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.403 [2024-11-28 08:29:56.629487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.403 [2024-11-28 08:29:56.629494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.403 [2024-11-28 08:29:56.629501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.403 [2024-11-28 08:29:56.629515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.403 qpair failed and we were unable to recover it.
00:30:59.403 [2024-11-28 08:29:56.639455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.403 [2024-11-28 08:29:56.639510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.403 [2024-11-28 08:29:56.639523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.403 [2024-11-28 08:29:56.639530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.403 [2024-11-28 08:29:56.639537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.403 [2024-11-28 08:29:56.639551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.403 qpair failed and we were unable to recover it.
00:30:59.403 [2024-11-28 08:29:56.649464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.403 [2024-11-28 08:29:56.649511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.403 [2024-11-28 08:29:56.649524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.403 [2024-11-28 08:29:56.649531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.403 [2024-11-28 08:29:56.649537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.403 [2024-11-28 08:29:56.649551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.403 qpair failed and we were unable to recover it.
00:30:59.403 [2024-11-28 08:29:56.659460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.403 [2024-11-28 08:29:56.659510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.403 [2024-11-28 08:29:56.659522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.403 [2024-11-28 08:29:56.659529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.403 [2024-11-28 08:29:56.659536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.403 [2024-11-28 08:29:56.659550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.403 qpair failed and we were unable to recover it.
00:30:59.403 [2024-11-28 08:29:56.669534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.403 [2024-11-28 08:29:56.669590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.403 [2024-11-28 08:29:56.669602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.403 [2024-11-28 08:29:56.669609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.403 [2024-11-28 08:29:56.669616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.403 [2024-11-28 08:29:56.669630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.403 qpair failed and we were unable to recover it.
00:30:59.403 [2024-11-28 08:29:56.679586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.403 [2024-11-28 08:29:56.679641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.403 [2024-11-28 08:29:56.679657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.403 [2024-11-28 08:29:56.679664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.403 [2024-11-28 08:29:56.679670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.403 [2024-11-28 08:29:56.679684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.403 qpair failed and we were unable to recover it.
00:30:59.666 [2024-11-28 08:29:56.689548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.666 [2024-11-28 08:29:56.689597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.666 [2024-11-28 08:29:56.689610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.666 [2024-11-28 08:29:56.689617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.666 [2024-11-28 08:29:56.689623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:59.666 [2024-11-28 08:29:56.689637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:59.666 qpair failed and we were unable to recover it. 00:30:59.666 [2024-11-28 08:29:56.699559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.666 [2024-11-28 08:29:56.699613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.666 [2024-11-28 08:29:56.699626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.666 [2024-11-28 08:29:56.699633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.666 [2024-11-28 08:29:56.699640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:59.666 [2024-11-28 08:29:56.699654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:59.666 qpair failed and we were unable to recover it. 00:30:59.666 [2024-11-28 08:29:56.709618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.666 [2024-11-28 08:29:56.709672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.666 [2024-11-28 08:29:56.709685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.666 [2024-11-28 08:29:56.709692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.667 [2024-11-28 08:29:56.709699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:59.667 [2024-11-28 08:29:56.709713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:59.667 qpair failed and we were unable to recover it. 
00:30:59.667 [2024-11-28 08:29:56.719675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.667 [2024-11-28 08:29:56.719769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.667 [2024-11-28 08:29:56.719782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.667 [2024-11-28 08:29:56.719792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.667 [2024-11-28 08:29:56.719799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:59.667 [2024-11-28 08:29:56.719813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:59.667 qpair failed and we were unable to recover it. 00:30:59.667 [2024-11-28 08:29:56.729696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.667 [2024-11-28 08:29:56.729749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.667 [2024-11-28 08:29:56.729762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.667 [2024-11-28 08:29:56.729769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.667 [2024-11-28 08:29:56.729775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:59.667 [2024-11-28 08:29:56.729789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:59.667 qpair failed and we were unable to recover it. 00:30:59.667 [2024-11-28 08:29:56.739697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.667 [2024-11-28 08:29:56.739742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.667 [2024-11-28 08:29:56.739755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.667 [2024-11-28 08:29:56.739762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.667 [2024-11-28 08:29:56.739768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:59.667 [2024-11-28 08:29:56.739782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:59.667 qpair failed and we were unable to recover it. 
00:30:59.667 [2024-11-28 08:29:56.749750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.667 [2024-11-28 08:29:56.749800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.667 [2024-11-28 08:29:56.749813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.667 [2024-11-28 08:29:56.749820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.667 [2024-11-28 08:29:56.749826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:59.667 [2024-11-28 08:29:56.749840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:59.667 qpair failed and we were unable to recover it. 00:30:59.667 [2024-11-28 08:29:56.759787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.667 [2024-11-28 08:29:56.759846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.667 [2024-11-28 08:29:56.759861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.667 [2024-11-28 08:29:56.759868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.667 [2024-11-28 08:29:56.759874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:59.667 [2024-11-28 08:29:56.759892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:59.667 qpair failed and we were unable to recover it. 00:30:59.667 [2024-11-28 08:29:56.769781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.667 [2024-11-28 08:29:56.769837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.667 [2024-11-28 08:29:56.769851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.667 [2024-11-28 08:29:56.769858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.667 [2024-11-28 08:29:56.769864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:30:59.667 [2024-11-28 08:29:56.769878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:59.667 qpair failed and we were unable to recover it. 
00:30:59.667 [2024-11-28 08:29:56.779787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.667 [2024-11-28 08:29:56.779838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.667 [2024-11-28 08:29:56.779851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.667 [2024-11-28 08:29:56.779858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.667 [2024-11-28 08:29:56.779865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.667 [2024-11-28 08:29:56.779879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.667 qpair failed and we were unable to recover it.
00:30:59.667 [2024-11-28 08:29:56.789872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.667 [2024-11-28 08:29:56.789929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.667 [2024-11-28 08:29:56.789943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.667 [2024-11-28 08:29:56.789950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.667 [2024-11-28 08:29:56.789956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.667 [2024-11-28 08:29:56.789970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.667 qpair failed and we were unable to recover it.
00:30:59.667 [2024-11-28 08:29:56.799923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.667 [2024-11-28 08:29:56.799989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.667 [2024-11-28 08:29:56.800002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.667 [2024-11-28 08:29:56.800009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.667 [2024-11-28 08:29:56.800015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.667 [2024-11-28 08:29:56.800030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.667 qpair failed and we were unable to recover it.
00:30:59.667 [2024-11-28 08:29:56.809929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.667 [2024-11-28 08:29:56.809983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.667 [2024-11-28 08:29:56.809996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.667 [2024-11-28 08:29:56.810003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.667 [2024-11-28 08:29:56.810009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.667 [2024-11-28 08:29:56.810023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.667 qpair failed and we were unable to recover it.
00:30:59.667 [2024-11-28 08:29:56.819886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.667 [2024-11-28 08:29:56.819960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.667 [2024-11-28 08:29:56.819973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.667 [2024-11-28 08:29:56.819980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.667 [2024-11-28 08:29:56.819986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.667 [2024-11-28 08:29:56.820000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.667 qpair failed and we were unable to recover it.
00:30:59.667 [2024-11-28 08:29:56.829975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.667 [2024-11-28 08:29:56.830028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.667 [2024-11-28 08:29:56.830041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.667 [2024-11-28 08:29:56.830047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.667 [2024-11-28 08:29:56.830054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.667 [2024-11-28 08:29:56.830067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.667 qpair failed and we were unable to recover it.
00:30:59.667 [2024-11-28 08:29:56.839885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.667 [2024-11-28 08:29:56.839940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.667 [2024-11-28 08:29:56.839954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.667 [2024-11-28 08:29:56.839961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.667 [2024-11-28 08:29:56.839968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.668 [2024-11-28 08:29:56.839983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.668 qpair failed and we were unable to recover it.
00:30:59.668 [2024-11-28 08:29:56.849881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.668 [2024-11-28 08:29:56.849933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.668 [2024-11-28 08:29:56.849946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.668 [2024-11-28 08:29:56.849956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.668 [2024-11-28 08:29:56.849963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.668 [2024-11-28 08:29:56.849977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.668 qpair failed and we were unable to recover it.
00:30:59.668 [2024-11-28 08:29:56.859962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.668 [2024-11-28 08:29:56.860009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.668 [2024-11-28 08:29:56.860022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.668 [2024-11-28 08:29:56.860029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.668 [2024-11-28 08:29:56.860035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.668 [2024-11-28 08:29:56.860049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.668 qpair failed and we were unable to recover it.
00:30:59.668 [2024-11-28 08:29:56.870077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.668 [2024-11-28 08:29:56.870134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.668 [2024-11-28 08:29:56.870147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.668 [2024-11-28 08:29:56.870154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.668 [2024-11-28 08:29:56.870164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.668 [2024-11-28 08:29:56.870178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.668 qpair failed and we were unable to recover it.
00:30:59.668 [2024-11-28 08:29:56.880085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.668 [2024-11-28 08:29:56.880178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.668 [2024-11-28 08:29:56.880192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.668 [2024-11-28 08:29:56.880199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.668 [2024-11-28 08:29:56.880205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.668 [2024-11-28 08:29:56.880219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.668 qpair failed and we were unable to recover it.
00:30:59.668 [2024-11-28 08:29:56.890121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.668 [2024-11-28 08:29:56.890176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.668 [2024-11-28 08:29:56.890189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.668 [2024-11-28 08:29:56.890197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.668 [2024-11-28 08:29:56.890203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.668 [2024-11-28 08:29:56.890221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.668 qpair failed and we were unable to recover it.
00:30:59.668 [2024-11-28 08:29:56.900118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.668 [2024-11-28 08:29:56.900177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.668 [2024-11-28 08:29:56.900191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.668 [2024-11-28 08:29:56.900198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.668 [2024-11-28 08:29:56.900204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.668 [2024-11-28 08:29:56.900218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.668 qpair failed and we were unable to recover it.
00:30:59.668 [2024-11-28 08:29:56.910179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.668 [2024-11-28 08:29:56.910235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.668 [2024-11-28 08:29:56.910248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.668 [2024-11-28 08:29:56.910256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.668 [2024-11-28 08:29:56.910263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.668 [2024-11-28 08:29:56.910277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.668 qpair failed and we were unable to recover it.
00:30:59.668 [2024-11-28 08:29:56.920210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.668 [2024-11-28 08:29:56.920264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.668 [2024-11-28 08:29:56.920278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.668 [2024-11-28 08:29:56.920285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.668 [2024-11-28 08:29:56.920291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.668 [2024-11-28 08:29:56.920305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.668 qpair failed and we were unable to recover it.
00:30:59.668 [2024-11-28 08:29:56.930229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.668 [2024-11-28 08:29:56.930279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.668 [2024-11-28 08:29:56.930292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.668 [2024-11-28 08:29:56.930299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.668 [2024-11-28 08:29:56.930306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.668 [2024-11-28 08:29:56.930320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.668 qpair failed and we were unable to recover it.
00:30:59.668 [2024-11-28 08:29:56.940224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.668 [2024-11-28 08:29:56.940274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.668 [2024-11-28 08:29:56.940287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.668 [2024-11-28 08:29:56.940294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.668 [2024-11-28 08:29:56.940300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.668 [2024-11-28 08:29:56.940314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.668 qpair failed and we were unable to recover it.
00:30:59.668 [2024-11-28 08:29:56.950306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.668 [2024-11-28 08:29:56.950362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.668 [2024-11-28 08:29:56.950375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.668 [2024-11-28 08:29:56.950382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.668 [2024-11-28 08:29:56.950388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.668 [2024-11-28 08:29:56.950403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.668 qpair failed and we were unable to recover it.
00:30:59.932 [2024-11-28 08:29:56.960269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.933 [2024-11-28 08:29:56.960318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.933 [2024-11-28 08:29:56.960331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.933 [2024-11-28 08:29:56.960337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.933 [2024-11-28 08:29:56.960344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.933 [2024-11-28 08:29:56.960358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.933 qpair failed and we were unable to recover it.
00:30:59.933 [2024-11-28 08:29:56.970327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.933 [2024-11-28 08:29:56.970381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.933 [2024-11-28 08:29:56.970394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.933 [2024-11-28 08:29:56.970401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.933 [2024-11-28 08:29:56.970407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.933 [2024-11-28 08:29:56.970421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.933 qpair failed and we were unable to recover it.
00:30:59.933 [2024-11-28 08:29:56.980321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.933 [2024-11-28 08:29:56.980364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.933 [2024-11-28 08:29:56.980381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.933 [2024-11-28 08:29:56.980388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.933 [2024-11-28 08:29:56.980394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.933 [2024-11-28 08:29:56.980408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.933 qpair failed and we were unable to recover it.
00:30:59.933 [2024-11-28 08:29:56.990387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.933 [2024-11-28 08:29:56.990443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.933 [2024-11-28 08:29:56.990456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.933 [2024-11-28 08:29:56.990463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.933 [2024-11-28 08:29:56.990470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.933 [2024-11-28 08:29:56.990483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.933 qpair failed and we were unable to recover it.
00:30:59.933 [2024-11-28 08:29:57.000402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.933 [2024-11-28 08:29:57.000454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.933 [2024-11-28 08:29:57.000467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.933 [2024-11-28 08:29:57.000474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.933 [2024-11-28 08:29:57.000481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.933 [2024-11-28 08:29:57.000495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.933 qpair failed and we were unable to recover it.
00:30:59.933 [2024-11-28 08:29:57.010455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.933 [2024-11-28 08:29:57.010508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.933 [2024-11-28 08:29:57.010521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.933 [2024-11-28 08:29:57.010528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.933 [2024-11-28 08:29:57.010535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.933 [2024-11-28 08:29:57.010549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.933 qpair failed and we were unable to recover it.
00:30:59.933 [2024-11-28 08:29:57.020426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.933 [2024-11-28 08:29:57.020469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.933 [2024-11-28 08:29:57.020482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.933 [2024-11-28 08:29:57.020489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.933 [2024-11-28 08:29:57.020499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.933 [2024-11-28 08:29:57.020513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.933 qpair failed and we were unable to recover it.
00:30:59.933 [2024-11-28 08:29:57.030496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.933 [2024-11-28 08:29:57.030551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.933 [2024-11-28 08:29:57.030564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.933 [2024-11-28 08:29:57.030571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.933 [2024-11-28 08:29:57.030577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.933 [2024-11-28 08:29:57.030592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.933 qpair failed and we were unable to recover it.
00:30:59.933 [2024-11-28 08:29:57.040480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.933 [2024-11-28 08:29:57.040532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.933 [2024-11-28 08:29:57.040545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.933 [2024-11-28 08:29:57.040552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.933 [2024-11-28 08:29:57.040558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.933 [2024-11-28 08:29:57.040572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.933 qpair failed and we were unable to recover it.
00:30:59.933 [2024-11-28 08:29:57.050533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.933 [2024-11-28 08:29:57.050581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.933 [2024-11-28 08:29:57.050594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.933 [2024-11-28 08:29:57.050601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.933 [2024-11-28 08:29:57.050607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.933 [2024-11-28 08:29:57.050622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.933 qpair failed and we were unable to recover it.
00:30:59.933 [2024-11-28 08:29:57.060537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.933 [2024-11-28 08:29:57.060625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.933 [2024-11-28 08:29:57.060638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.933 [2024-11-28 08:29:57.060645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.933 [2024-11-28 08:29:57.060651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.933 [2024-11-28 08:29:57.060665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.933 qpair failed and we were unable to recover it.
00:30:59.933 [2024-11-28 08:29:57.070622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.933 [2024-11-28 08:29:57.070677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.933 [2024-11-28 08:29:57.070690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.933 [2024-11-28 08:29:57.070697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.934 [2024-11-28 08:29:57.070703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.934 [2024-11-28 08:29:57.070717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.934 qpair failed and we were unable to recover it.
00:30:59.934 [2024-11-28 08:29:57.080615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.934 [2024-11-28 08:29:57.080666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.934 [2024-11-28 08:29:57.080679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.934 [2024-11-28 08:29:57.080686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.934 [2024-11-28 08:29:57.080692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.934 [2024-11-28 08:29:57.080706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.934 qpair failed and we were unable to recover it.
00:30:59.934 [2024-11-28 08:29:57.090638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.934 [2024-11-28 08:29:57.090692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.934 [2024-11-28 08:29:57.090705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.934 [2024-11-28 08:29:57.090712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.934 [2024-11-28 08:29:57.090719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.934 [2024-11-28 08:29:57.090732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.934 qpair failed and we were unable to recover it.
00:30:59.934 [2024-11-28 08:29:57.100647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.934 [2024-11-28 08:29:57.100704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.934 [2024-11-28 08:29:57.100718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.934 [2024-11-28 08:29:57.100725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.934 [2024-11-28 08:29:57.100731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.934 [2024-11-28 08:29:57.100745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.934 qpair failed and we were unable to recover it.
00:30:59.934 [2024-11-28 08:29:57.110724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.934 [2024-11-28 08:29:57.110783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.934 [2024-11-28 08:29:57.110799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.934 [2024-11-28 08:29:57.110806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.934 [2024-11-28 08:29:57.110813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.934 [2024-11-28 08:29:57.110827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.934 qpair failed and we were unable to recover it.
00:30:59.934 [2024-11-28 08:29:57.120716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.934 [2024-11-28 08:29:57.120769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.934 [2024-11-28 08:29:57.120782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.934 [2024-11-28 08:29:57.120789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.934 [2024-11-28 08:29:57.120795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.934 [2024-11-28 08:29:57.120809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.934 qpair failed and we were unable to recover it.
00:30:59.934 [2024-11-28 08:29:57.130765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.934 [2024-11-28 08:29:57.130814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.934 [2024-11-28 08:29:57.130827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.934 [2024-11-28 08:29:57.130834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.934 [2024-11-28 08:29:57.130840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.934 [2024-11-28 08:29:57.130854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.934 qpair failed and we were unable to recover it.
00:30:59.934 [2024-11-28 08:29:57.140633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.934 [2024-11-28 08:29:57.140682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.934 [2024-11-28 08:29:57.140695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.934 [2024-11-28 08:29:57.140702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.934 [2024-11-28 08:29:57.140708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.934 [2024-11-28 08:29:57.140722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.934 qpair failed and we were unable to recover it.
00:30:59.934 [2024-11-28 08:29:57.150843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.934 [2024-11-28 08:29:57.150895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.934 [2024-11-28 08:29:57.150908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.934 [2024-11-28 08:29:57.150915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.934 [2024-11-28 08:29:57.150929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.934 [2024-11-28 08:29:57.150944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.934 qpair failed and we were unable to recover it.
00:30:59.934 [2024-11-28 08:29:57.160808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.934 [2024-11-28 08:29:57.160881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.934 [2024-11-28 08:29:57.160895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.934 [2024-11-28 08:29:57.160902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.934 [2024-11-28 08:29:57.160908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.934 [2024-11-28 08:29:57.160922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.934 qpair failed and we were unable to recover it.
00:30:59.934 [2024-11-28 08:29:57.170869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.934 [2024-11-28 08:29:57.170916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.934 [2024-11-28 08:29:57.170929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.934 [2024-11-28 08:29:57.170936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.934 [2024-11-28 08:29:57.170943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.934 [2024-11-28 08:29:57.170956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.934 qpair failed and we were unable to recover it.
00:30:59.934 [2024-11-28 08:29:57.180866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.934 [2024-11-28 08:29:57.180940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.934 [2024-11-28 08:29:57.180965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.934 [2024-11-28 08:29:57.180974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.934 [2024-11-28 08:29:57.180981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.934 [2024-11-28 08:29:57.181000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.934 qpair failed and we were unable to recover it.
00:30:59.934 [2024-11-28 08:29:57.190972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.935 [2024-11-28 08:29:57.191052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.935 [2024-11-28 08:29:57.191077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.935 [2024-11-28 08:29:57.191086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.935 [2024-11-28 08:29:57.191093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.935 [2024-11-28 08:29:57.191113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.935 qpair failed and we were unable to recover it.
00:30:59.935 [2024-11-28 08:29:57.200936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.935 [2024-11-28 08:29:57.200990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.935 [2024-11-28 08:29:57.201006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.935 [2024-11-28 08:29:57.201013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.935 [2024-11-28 08:29:57.201020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.935 [2024-11-28 08:29:57.201036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.935 qpair failed and we were unable to recover it.
00:30:59.935 [2024-11-28 08:29:57.210980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.935 [2024-11-28 08:29:57.211033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.935 [2024-11-28 08:29:57.211047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.935 [2024-11-28 08:29:57.211054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.935 [2024-11-28 08:29:57.211060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:30:59.935 [2024-11-28 08:29:57.211075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:59.935 qpair failed and we were unable to recover it.
00:31:00.198 [2024-11-28 08:29:57.220960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:00.198 [2024-11-28 08:29:57.221014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:00.198 [2024-11-28 08:29:57.221027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:00.198 [2024-11-28 08:29:57.221035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:00.198 [2024-11-28 08:29:57.221041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:00.198 [2024-11-28 08:29:57.221056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:00.198 qpair failed and we were unable to recover it.
00:31:00.198 [2024-11-28 08:29:57.231035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:00.198 [2024-11-28 08:29:57.231084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:00.198 [2024-11-28 08:29:57.231098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:00.198 [2024-11-28 08:29:57.231105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:00.198 [2024-11-28 08:29:57.231111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:00.199 [2024-11-28 08:29:57.231125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:00.199 qpair failed and we were unable to recover it.
00:31:00.199 [2024-11-28 08:29:57.240934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:00.199 [2024-11-28 08:29:57.240989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:00.199 [2024-11-28 08:29:57.241006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:00.199 [2024-11-28 08:29:57.241013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:00.199 [2024-11-28 08:29:57.241019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:00.199 [2024-11-28 08:29:57.241034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:00.199 qpair failed and we were unable to recover it.
00:31:00.199 [2024-11-28 08:29:57.251100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:00.199 [2024-11-28 08:29:57.251154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:00.199 [2024-11-28 08:29:57.251172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:00.199 [2024-11-28 08:29:57.251179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:00.199 [2024-11-28 08:29:57.251186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:00.199 [2024-11-28 08:29:57.251200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:00.199 qpair failed and we were unable to recover it.
00:31:00.199 [2024-11-28 08:29:57.261073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:00.199 [2024-11-28 08:29:57.261119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:00.199 [2024-11-28 08:29:57.261133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:00.199 [2024-11-28 08:29:57.261140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:00.199 [2024-11-28 08:29:57.261146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:00.199 [2024-11-28 08:29:57.261165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:00.199 qpair failed and we were unable to recover it.
00:31:00.199 [2024-11-28 08:29:57.271025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:00.199 [2024-11-28 08:29:57.271079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:00.199 [2024-11-28 08:29:57.271092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:00.199 [2024-11-28 08:29:57.271099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:00.199 [2024-11-28 08:29:57.271105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:00.199 [2024-11-28 08:29:57.271119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:00.199 qpair failed and we were unable to recover it.
00:31:00.199 [2024-11-28 08:29:57.281126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:00.199 [2024-11-28 08:29:57.281179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:00.199 [2024-11-28 08:29:57.281193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:00.199 [2024-11-28 08:29:57.281204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:00.199 [2024-11-28 08:29:57.281210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:00.199 [2024-11-28 08:29:57.281225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:00.199 qpair failed and we were unable to recover it.
00:31:00.199 [2024-11-28 08:29:57.291221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:00.199 [2024-11-28 08:29:57.291276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:00.199 [2024-11-28 08:29:57.291290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:00.199 [2024-11-28 08:29:57.291297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:00.199 [2024-11-28 08:29:57.291304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:00.199 [2024-11-28 08:29:57.291319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:00.199 qpair failed and we were unable to recover it.
00:31:00.199 [2024-11-28 08:29:57.301188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:00.199 [2024-11-28 08:29:57.301239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:00.199 [2024-11-28 08:29:57.301252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:00.199 [2024-11-28 08:29:57.301259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:00.199 [2024-11-28 08:29:57.301266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:00.199 [2024-11-28 08:29:57.301280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:00.199 qpair failed and we were unable to recover it.
00:31:00.199 [2024-11-28 08:29:57.311314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:00.199 [2024-11-28 08:29:57.311369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:00.199 [2024-11-28 08:29:57.311382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:00.199 [2024-11-28 08:29:57.311389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:00.199 [2024-11-28 08:29:57.311395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:00.199 [2024-11-28 08:29:57.311409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:00.199 qpair failed and we were unable to recover it.
00:31:00.199 [2024-11-28 08:29:57.321268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:00.199 [2024-11-28 08:29:57.321321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:00.199 [2024-11-28 08:29:57.321334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:00.199 [2024-11-28 08:29:57.321341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:00.199 [2024-11-28 08:29:57.321347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:00.199 [2024-11-28 08:29:57.321361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:00.199 qpair failed and we were unable to recover it.
00:31:00.199 [2024-11-28 08:29:57.331299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:00.199 [2024-11-28 08:29:57.331356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:00.199 [2024-11-28 08:29:57.331371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:00.199 [2024-11-28 08:29:57.331378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:00.199 [2024-11-28 08:29:57.331385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:00.199 [2024-11-28 08:29:57.331404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:00.199 qpair failed and we were unable to recover it.
00:31:00.199 [2024-11-28 08:29:57.341290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:00.199 [2024-11-28 08:29:57.341383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:00.199 [2024-11-28 08:29:57.341397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:00.199 [2024-11-28 08:29:57.341404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:00.199 [2024-11-28 08:29:57.341410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:00.199 [2024-11-28 08:29:57.341425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:00.199 qpair failed and we were unable to recover it.
00:31:00.199 [2024-11-28 08:29:57.351368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.199 [2024-11-28 08:29:57.351422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.199 [2024-11-28 08:29:57.351435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.199 [2024-11-28 08:29:57.351442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.199 [2024-11-28 08:29:57.351449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.199 [2024-11-28 08:29:57.351463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.199 qpair failed and we were unable to recover it. 00:31:00.199 [2024-11-28 08:29:57.361395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.199 [2024-11-28 08:29:57.361445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.199 [2024-11-28 08:29:57.361458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.200 [2024-11-28 08:29:57.361465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.200 [2024-11-28 08:29:57.361472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.200 [2024-11-28 08:29:57.361486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.200 qpair failed and we were unable to recover it. 00:31:00.200 [2024-11-28 08:29:57.371420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.200 [2024-11-28 08:29:57.371477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.200 [2024-11-28 08:29:57.371490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.200 [2024-11-28 08:29:57.371497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.200 [2024-11-28 08:29:57.371504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.200 [2024-11-28 08:29:57.371517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.200 qpair failed and we were unable to recover it. 
00:31:00.200 [2024-11-28 08:29:57.381367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.200 [2024-11-28 08:29:57.381411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.200 [2024-11-28 08:29:57.381425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.200 [2024-11-28 08:29:57.381432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.200 [2024-11-28 08:29:57.381438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.200 [2024-11-28 08:29:57.381452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.200 qpair failed and we were unable to recover it. 00:31:00.200 [2024-11-28 08:29:57.391503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.200 [2024-11-28 08:29:57.391558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.200 [2024-11-28 08:29:57.391571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.200 [2024-11-28 08:29:57.391578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.200 [2024-11-28 08:29:57.391585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.200 [2024-11-28 08:29:57.391599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.200 qpair failed and we were unable to recover it. 00:31:00.200 [2024-11-28 08:29:57.401472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.200 [2024-11-28 08:29:57.401571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.200 [2024-11-28 08:29:57.401585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.200 [2024-11-28 08:29:57.401592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.200 [2024-11-28 08:29:57.401598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.200 [2024-11-28 08:29:57.401611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.200 qpair failed and we were unable to recover it. 
00:31:00.200 [2024-11-28 08:29:57.411488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.200 [2024-11-28 08:29:57.411539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.200 [2024-11-28 08:29:57.411552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.200 [2024-11-28 08:29:57.411562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.200 [2024-11-28 08:29:57.411569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.200 [2024-11-28 08:29:57.411583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.200 qpair failed and we were unable to recover it. 00:31:00.200 [2024-11-28 08:29:57.421507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.200 [2024-11-28 08:29:57.421558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.200 [2024-11-28 08:29:57.421572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.200 [2024-11-28 08:29:57.421578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.200 [2024-11-28 08:29:57.421585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.200 [2024-11-28 08:29:57.421599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.200 qpair failed and we were unable to recover it. 00:31:00.200 [2024-11-28 08:29:57.431581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.200 [2024-11-28 08:29:57.431634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.200 [2024-11-28 08:29:57.431646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.200 [2024-11-28 08:29:57.431653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.200 [2024-11-28 08:29:57.431659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.200 [2024-11-28 08:29:57.431673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.200 qpair failed and we were unable to recover it. 
00:31:00.200 [2024-11-28 08:29:57.441580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.200 [2024-11-28 08:29:57.441634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.200 [2024-11-28 08:29:57.441647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.200 [2024-11-28 08:29:57.441654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.200 [2024-11-28 08:29:57.441660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.200 [2024-11-28 08:29:57.441674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.200 qpair failed and we were unable to recover it. 00:31:00.200 [2024-11-28 08:29:57.451625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.200 [2024-11-28 08:29:57.451682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.200 [2024-11-28 08:29:57.451695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.200 [2024-11-28 08:29:57.451702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.200 [2024-11-28 08:29:57.451708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.200 [2024-11-28 08:29:57.451726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.200 qpair failed and we were unable to recover it. 00:31:00.200 [2024-11-28 08:29:57.461589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.200 [2024-11-28 08:29:57.461639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.200 [2024-11-28 08:29:57.461652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.200 [2024-11-28 08:29:57.461659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.200 [2024-11-28 08:29:57.461665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.200 [2024-11-28 08:29:57.461679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.200 qpair failed and we were unable to recover it. 
00:31:00.200 [2024-11-28 08:29:57.471669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.200 [2024-11-28 08:29:57.471723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.200 [2024-11-28 08:29:57.471736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.200 [2024-11-28 08:29:57.471743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.200 [2024-11-28 08:29:57.471750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.200 [2024-11-28 08:29:57.471763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.200 qpair failed and we were unable to recover it. 00:31:00.200 [2024-11-28 08:29:57.481709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.200 [2024-11-28 08:29:57.481754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.200 [2024-11-28 08:29:57.481768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.200 [2024-11-28 08:29:57.481775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.200 [2024-11-28 08:29:57.481781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.200 [2024-11-28 08:29:57.481795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.200 qpair failed and we were unable to recover it. 00:31:00.464 [2024-11-28 08:29:57.491736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.464 [2024-11-28 08:29:57.491794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.464 [2024-11-28 08:29:57.491807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.464 [2024-11-28 08:29:57.491814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.464 [2024-11-28 08:29:57.491821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.464 [2024-11-28 08:29:57.491834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.464 qpair failed and we were unable to recover it. 
00:31:00.464 [2024-11-28 08:29:57.501726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.464 [2024-11-28 08:29:57.501779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.464 [2024-11-28 08:29:57.501792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.464 [2024-11-28 08:29:57.501800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.464 [2024-11-28 08:29:57.501806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.464 [2024-11-28 08:29:57.501820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.464 qpair failed and we were unable to recover it. 00:31:00.464 [2024-11-28 08:29:57.511814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.464 [2024-11-28 08:29:57.511869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.464 [2024-11-28 08:29:57.511883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.464 [2024-11-28 08:29:57.511889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.464 [2024-11-28 08:29:57.511896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.464 [2024-11-28 08:29:57.511910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.464 qpair failed and we were unable to recover it. 00:31:00.464 [2024-11-28 08:29:57.521809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.464 [2024-11-28 08:29:57.521855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.464 [2024-11-28 08:29:57.521868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.464 [2024-11-28 08:29:57.521875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.464 [2024-11-28 08:29:57.521882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.464 [2024-11-28 08:29:57.521896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.464 qpair failed and we were unable to recover it. 
00:31:00.464 [2024-11-28 08:29:57.531865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.464 [2024-11-28 08:29:57.531918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.464 [2024-11-28 08:29:57.531943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.464 [2024-11-28 08:29:57.531952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.464 [2024-11-28 08:29:57.531959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.464 [2024-11-28 08:29:57.531978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.464 qpair failed and we were unable to recover it. 00:31:00.464 [2024-11-28 08:29:57.541818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.464 [2024-11-28 08:29:57.541878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.464 [2024-11-28 08:29:57.541907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.464 [2024-11-28 08:29:57.541917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.464 [2024-11-28 08:29:57.541924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.464 [2024-11-28 08:29:57.541944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.464 qpair failed and we were unable to recover it. 00:31:00.464 [2024-11-28 08:29:57.551871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.464 [2024-11-28 08:29:57.551928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.464 [2024-11-28 08:29:57.551954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.464 [2024-11-28 08:29:57.551962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.464 [2024-11-28 08:29:57.551969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.464 [2024-11-28 08:29:57.551988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.465 qpair failed and we were unable to recover it. 
00:31:00.465 [2024-11-28 08:29:57.561909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.465 [2024-11-28 08:29:57.561960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.465 [2024-11-28 08:29:57.561975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.465 [2024-11-28 08:29:57.561982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.465 [2024-11-28 08:29:57.561988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.465 [2024-11-28 08:29:57.562003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.465 qpair failed and we were unable to recover it. 00:31:00.465 [2024-11-28 08:29:57.571921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.465 [2024-11-28 08:29:57.571974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.465 [2024-11-28 08:29:57.571987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.465 [2024-11-28 08:29:57.571994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.465 [2024-11-28 08:29:57.572000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.465 [2024-11-28 08:29:57.572015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.465 qpair failed and we were unable to recover it. 00:31:00.465 [2024-11-28 08:29:57.581925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.465 [2024-11-28 08:29:57.581978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.465 [2024-11-28 08:29:57.581992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.465 [2024-11-28 08:29:57.581999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.465 [2024-11-28 08:29:57.582010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.465 [2024-11-28 08:29:57.582024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.465 qpair failed and we were unable to recover it. 
00:31:00.465 [2024-11-28 08:29:57.592025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.465 [2024-11-28 08:29:57.592079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.465 [2024-11-28 08:29:57.592093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.465 [2024-11-28 08:29:57.592100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.465 [2024-11-28 08:29:57.592106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.465 [2024-11-28 08:29:57.592120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.465 qpair failed and we were unable to recover it. 00:31:00.465 [2024-11-28 08:29:57.602027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.465 [2024-11-28 08:29:57.602076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.465 [2024-11-28 08:29:57.602089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.465 [2024-11-28 08:29:57.602096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.465 [2024-11-28 08:29:57.602103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.465 [2024-11-28 08:29:57.602117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.465 qpair failed and we were unable to recover it. 00:31:00.465 [2024-11-28 08:29:57.612041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.465 [2024-11-28 08:29:57.612100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.465 [2024-11-28 08:29:57.612115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.465 [2024-11-28 08:29:57.612124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.465 [2024-11-28 08:29:57.612130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.465 [2024-11-28 08:29:57.612144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.465 qpair failed and we were unable to recover it. 
00:31:00.465 [2024-11-28 08:29:57.622067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.465 [2024-11-28 08:29:57.622124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.465 [2024-11-28 08:29:57.622138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.465 [2024-11-28 08:29:57.622145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.465 [2024-11-28 08:29:57.622151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.465 [2024-11-28 08:29:57.622169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.465 qpair failed and we were unable to recover it. 00:31:00.465 [2024-11-28 08:29:57.632184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.465 [2024-11-28 08:29:57.632256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.465 [2024-11-28 08:29:57.632269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.465 [2024-11-28 08:29:57.632276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.465 [2024-11-28 08:29:57.632283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.465 [2024-11-28 08:29:57.632297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.465 qpair failed and we were unable to recover it. 00:31:00.465 [2024-11-28 08:29:57.642136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.465 [2024-11-28 08:29:57.642194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.465 [2024-11-28 08:29:57.642208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.465 [2024-11-28 08:29:57.642215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.465 [2024-11-28 08:29:57.642221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.465 [2024-11-28 08:29:57.642235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.465 qpair failed and we were unable to recover it. 
00:31:00.465 [2024-11-28 08:29:57.652164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.465 [2024-11-28 08:29:57.652215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.465 [2024-11-28 08:29:57.652229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.465 [2024-11-28 08:29:57.652236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.465 [2024-11-28 08:29:57.652242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.465 [2024-11-28 08:29:57.652257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.465 qpair failed and we were unable to recover it. 00:31:00.465 [2024-11-28 08:29:57.662039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.465 [2024-11-28 08:29:57.662086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.465 [2024-11-28 08:29:57.662100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.465 [2024-11-28 08:29:57.662107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.465 [2024-11-28 08:29:57.662114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.465 [2024-11-28 08:29:57.662128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.465 qpair failed and we were unable to recover it. 00:31:00.465 [2024-11-28 08:29:57.672241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.465 [2024-11-28 08:29:57.672340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.465 [2024-11-28 08:29:57.672357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.465 [2024-11-28 08:29:57.672364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.465 [2024-11-28 08:29:57.672370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.465 [2024-11-28 08:29:57.672385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.465 qpair failed and we were unable to recover it. 
00:31:00.465 [2024-11-28 08:29:57.682240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.465 [2024-11-28 08:29:57.682292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.465 [2024-11-28 08:29:57.682305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.465 [2024-11-28 08:29:57.682312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.465 [2024-11-28 08:29:57.682319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.465 [2024-11-28 08:29:57.682333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.465 qpair failed and we were unable to recover it. 00:31:00.466 [2024-11-28 08:29:57.692267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.466 [2024-11-28 08:29:57.692326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.466 [2024-11-28 08:29:57.692340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.466 [2024-11-28 08:29:57.692347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.466 [2024-11-28 08:29:57.692353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.466 [2024-11-28 08:29:57.692368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.466 qpair failed and we were unable to recover it. 00:31:00.466 [2024-11-28 08:29:57.702259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.466 [2024-11-28 08:29:57.702327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.466 [2024-11-28 08:29:57.702340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.466 [2024-11-28 08:29:57.702347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.466 [2024-11-28 08:29:57.702353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.466 [2024-11-28 08:29:57.702367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.466 qpair failed and we were unable to recover it. 
00:31:00.466 [2024-11-28 08:29:57.712365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.466 [2024-11-28 08:29:57.712421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.466 [2024-11-28 08:29:57.712434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.466 [2024-11-28 08:29:57.712441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.466 [2024-11-28 08:29:57.712451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.466 [2024-11-28 08:29:57.712465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.466 qpair failed and we were unable to recover it. 00:31:00.466 [2024-11-28 08:29:57.722339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.466 [2024-11-28 08:29:57.722388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.466 [2024-11-28 08:29:57.722401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.466 [2024-11-28 08:29:57.722408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.466 [2024-11-28 08:29:57.722414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.466 [2024-11-28 08:29:57.722428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.466 qpair failed and we were unable to recover it. 00:31:00.466 [2024-11-28 08:29:57.732395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.466 [2024-11-28 08:29:57.732449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.466 [2024-11-28 08:29:57.732462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.466 [2024-11-28 08:29:57.732469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.466 [2024-11-28 08:29:57.732476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.466 [2024-11-28 08:29:57.732489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.466 qpair failed and we were unable to recover it. 
00:31:00.466 [2024-11-28 08:29:57.742394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.466 [2024-11-28 08:29:57.742440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.466 [2024-11-28 08:29:57.742454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.466 [2024-11-28 08:29:57.742461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.466 [2024-11-28 08:29:57.742467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.466 [2024-11-28 08:29:57.742481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.466 qpair failed and we were unable to recover it. 00:31:00.729 [2024-11-28 08:29:57.752513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.729 [2024-11-28 08:29:57.752566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.729 [2024-11-28 08:29:57.752579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.729 [2024-11-28 08:29:57.752586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.729 [2024-11-28 08:29:57.752592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.729 [2024-11-28 08:29:57.752606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.729 qpair failed and we were unable to recover it. 00:31:00.729 [2024-11-28 08:29:57.762429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.729 [2024-11-28 08:29:57.762475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.729 [2024-11-28 08:29:57.762488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.729 [2024-11-28 08:29:57.762495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.729 [2024-11-28 08:29:57.762501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.729 [2024-11-28 08:29:57.762515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.729 qpair failed and we were unable to recover it. 
00:31:00.729 [2024-11-28 08:29:57.772509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.729 [2024-11-28 08:29:57.772559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.729 [2024-11-28 08:29:57.772572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.729 [2024-11-28 08:29:57.772579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.729 [2024-11-28 08:29:57.772585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.729 [2024-11-28 08:29:57.772599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.729 qpair failed and we were unable to recover it. 00:31:00.729 [2024-11-28 08:29:57.782483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.729 [2024-11-28 08:29:57.782528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.729 [2024-11-28 08:29:57.782541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.729 [2024-11-28 08:29:57.782548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.729 [2024-11-28 08:29:57.782555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.729 [2024-11-28 08:29:57.782568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.729 qpair failed and we were unable to recover it. 00:31:00.729 [2024-11-28 08:29:57.792565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.729 [2024-11-28 08:29:57.792620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.729 [2024-11-28 08:29:57.792634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.729 [2024-11-28 08:29:57.792641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.729 [2024-11-28 08:29:57.792647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.729 [2024-11-28 08:29:57.792661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.729 qpair failed and we were unable to recover it. 
00:31:00.729 [2024-11-28 08:29:57.802562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.729 [2024-11-28 08:29:57.802612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.730 [2024-11-28 08:29:57.802628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.730 [2024-11-28 08:29:57.802635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.730 [2024-11-28 08:29:57.802642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.730 [2024-11-28 08:29:57.802656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.730 qpair failed and we were unable to recover it. 00:31:00.730 [2024-11-28 08:29:57.812612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.730 [2024-11-28 08:29:57.812671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.730 [2024-11-28 08:29:57.812685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.730 [2024-11-28 08:29:57.812692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.730 [2024-11-28 08:29:57.812698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.730 [2024-11-28 08:29:57.812712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.730 qpair failed and we were unable to recover it. 00:31:00.730 [2024-11-28 08:29:57.822603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.730 [2024-11-28 08:29:57.822652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.730 [2024-11-28 08:29:57.822665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.730 [2024-11-28 08:29:57.822672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.730 [2024-11-28 08:29:57.822679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.730 [2024-11-28 08:29:57.822693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.730 qpair failed and we were unable to recover it. 
00:31:00.730 [2024-11-28 08:29:57.832677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.730 [2024-11-28 08:29:57.832732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.730 [2024-11-28 08:29:57.832745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.730 [2024-11-28 08:29:57.832752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.730 [2024-11-28 08:29:57.832758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.730 [2024-11-28 08:29:57.832772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.730 qpair failed and we were unable to recover it. 00:31:00.730 [2024-11-28 08:29:57.842685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.730 [2024-11-28 08:29:57.842742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.730 [2024-11-28 08:29:57.842756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.730 [2024-11-28 08:29:57.842770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.730 [2024-11-28 08:29:57.842777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.730 [2024-11-28 08:29:57.842791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.730 qpair failed and we were unable to recover it. 00:31:00.730 [2024-11-28 08:29:57.852617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.730 [2024-11-28 08:29:57.852721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.730 [2024-11-28 08:29:57.852734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.730 [2024-11-28 08:29:57.852741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.730 [2024-11-28 08:29:57.852747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.730 [2024-11-28 08:29:57.852761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.730 qpair failed and we were unable to recover it. 
00:31:00.730 [2024-11-28 08:29:57.862702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.730 [2024-11-28 08:29:57.862748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.730 [2024-11-28 08:29:57.862761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.730 [2024-11-28 08:29:57.862768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.730 [2024-11-28 08:29:57.862774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.730 [2024-11-28 08:29:57.862788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.730 qpair failed and we were unable to recover it. 00:31:00.730 [2024-11-28 08:29:57.872780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.730 [2024-11-28 08:29:57.872860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.730 [2024-11-28 08:29:57.872873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.730 [2024-11-28 08:29:57.872880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.730 [2024-11-28 08:29:57.872887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.730 [2024-11-28 08:29:57.872901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.730 qpair failed and we were unable to recover it. 00:31:00.730 [2024-11-28 08:29:57.882775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.730 [2024-11-28 08:29:57.882835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.730 [2024-11-28 08:29:57.882850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.730 [2024-11-28 08:29:57.882857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.730 [2024-11-28 08:29:57.882864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.730 [2024-11-28 08:29:57.882886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.730 qpair failed and we were unable to recover it. 
00:31:00.730 [2024-11-28 08:29:57.892712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.730 [2024-11-28 08:29:57.892765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.730 [2024-11-28 08:29:57.892779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.730 [2024-11-28 08:29:57.892786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.730 [2024-11-28 08:29:57.892792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.730 [2024-11-28 08:29:57.892806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.730 qpair failed and we were unable to recover it. 00:31:00.730 [2024-11-28 08:29:57.902866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.730 [2024-11-28 08:29:57.902950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.730 [2024-11-28 08:29:57.902964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.730 [2024-11-28 08:29:57.902971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.730 [2024-11-28 08:29:57.902977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.730 [2024-11-28 08:29:57.902992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.730 qpair failed and we were unable to recover it. 00:31:00.730 [2024-11-28 08:29:57.912931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.730 [2024-11-28 08:29:57.913003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.730 [2024-11-28 08:29:57.913029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.730 [2024-11-28 08:29:57.913038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.730 [2024-11-28 08:29:57.913045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.730 [2024-11-28 08:29:57.913065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.730 qpair failed and we were unable to recover it. 
00:31:00.730 [2024-11-28 08:29:57.922869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.730 [2024-11-28 08:29:57.922921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.730 [2024-11-28 08:29:57.922937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.730 [2024-11-28 08:29:57.922944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.730 [2024-11-28 08:29:57.922950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.730 [2024-11-28 08:29:57.922965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.730 qpair failed and we were unable to recover it. 00:31:00.730 [2024-11-28 08:29:57.932910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.730 [2024-11-28 08:29:57.932971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.730 [2024-11-28 08:29:57.932985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.731 [2024-11-28 08:29:57.932992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.731 [2024-11-28 08:29:57.932998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.731 [2024-11-28 08:29:57.933013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.731 qpair failed and we were unable to recover it. 00:31:00.731 [2024-11-28 08:29:57.942937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.731 [2024-11-28 08:29:57.943005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.731 [2024-11-28 08:29:57.943019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.731 [2024-11-28 08:29:57.943026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.731 [2024-11-28 08:29:57.943032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.731 [2024-11-28 08:29:57.943046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.731 qpair failed and we were unable to recover it. 
00:31:00.731 [2024-11-28 08:29:57.953025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.731 [2024-11-28 08:29:57.953081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.731 [2024-11-28 08:29:57.953094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.731 [2024-11-28 08:29:57.953101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.731 [2024-11-28 08:29:57.953107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.731 [2024-11-28 08:29:57.953121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.731 qpair failed and we were unable to recover it. 00:31:00.731 [2024-11-28 08:29:57.963009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.731 [2024-11-28 08:29:57.963055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.731 [2024-11-28 08:29:57.963069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.731 [2024-11-28 08:29:57.963075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.731 [2024-11-28 08:29:57.963082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.731 [2024-11-28 08:29:57.963096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.731 qpair failed and we were unable to recover it. 00:31:00.731 [2024-11-28 08:29:57.973076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.731 [2024-11-28 08:29:57.973124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.731 [2024-11-28 08:29:57.973137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.731 [2024-11-28 08:29:57.973148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.731 [2024-11-28 08:29:57.973155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.731 [2024-11-28 08:29:57.973173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.731 qpair failed and we were unable to recover it. 
00:31:00.731 [2024-11-28 08:29:57.983022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.731 [2024-11-28 08:29:57.983083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.731 [2024-11-28 08:29:57.983098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.731 [2024-11-28 08:29:57.983106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.731 [2024-11-28 08:29:57.983116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.731 [2024-11-28 08:29:57.983131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.731 qpair failed and we were unable to recover it. 00:31:00.731 [2024-11-28 08:29:57.992995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.731 [2024-11-28 08:29:57.993047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.731 [2024-11-28 08:29:57.993061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.731 [2024-11-28 08:29:57.993068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.731 [2024-11-28 08:29:57.993074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.731 [2024-11-28 08:29:57.993089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.731 qpair failed and we were unable to recover it. 00:31:00.731 [2024-11-28 08:29:58.003112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.731 [2024-11-28 08:29:58.003169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.731 [2024-11-28 08:29:58.003183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.731 [2024-11-28 08:29:58.003190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.731 [2024-11-28 08:29:58.003196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.731 [2024-11-28 08:29:58.003211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.731 qpair failed and we were unable to recover it. 
00:31:00.731 [2024-11-28 08:29:58.013166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.731 [2024-11-28 08:29:58.013218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.731 [2024-11-28 08:29:58.013231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.731 [2024-11-28 08:29:58.013237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.731 [2024-11-28 08:29:58.013244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.731 [2024-11-28 08:29:58.013262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.731 qpair failed and we were unable to recover it. 00:31:00.995 [2024-11-28 08:29:58.023154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.995 [2024-11-28 08:29:58.023232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.995 [2024-11-28 08:29:58.023245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.995 [2024-11-28 08:29:58.023252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.995 [2024-11-28 08:29:58.023258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.995 [2024-11-28 08:29:58.023273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.995 qpair failed and we were unable to recover it. 00:31:00.995 [2024-11-28 08:29:58.033243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.995 [2024-11-28 08:29:58.033299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.995 [2024-11-28 08:29:58.033312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.995 [2024-11-28 08:29:58.033319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.995 [2024-11-28 08:29:58.033325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.995 [2024-11-28 08:29:58.033339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.995 qpair failed and we were unable to recover it. 
00:31:00.995 [2024-11-28 08:29:58.043191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.995 [2024-11-28 08:29:58.043248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.995 [2024-11-28 08:29:58.043261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.995 [2024-11-28 08:29:58.043268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.995 [2024-11-28 08:29:58.043275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.995 [2024-11-28 08:29:58.043289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.995 qpair failed and we were unable to recover it. 00:31:00.995 [2024-11-28 08:29:58.053263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.995 [2024-11-28 08:29:58.053314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.995 [2024-11-28 08:29:58.053327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.995 [2024-11-28 08:29:58.053334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.995 [2024-11-28 08:29:58.053340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.995 [2024-11-28 08:29:58.053354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.995 qpair failed and we were unable to recover it. 00:31:00.995 [2024-11-28 08:29:58.063254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.995 [2024-11-28 08:29:58.063301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.995 [2024-11-28 08:29:58.063315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.995 [2024-11-28 08:29:58.063322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.996 [2024-11-28 08:29:58.063329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.996 [2024-11-28 08:29:58.063343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.996 qpair failed and we were unable to recover it. 
00:31:00.996 [2024-11-28 08:29:58.073313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.996 [2024-11-28 08:29:58.073367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.996 [2024-11-28 08:29:58.073380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.996 [2024-11-28 08:29:58.073387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.996 [2024-11-28 08:29:58.073394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.996 [2024-11-28 08:29:58.073408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.996 qpair failed and we were unable to recover it. 00:31:00.996 [2024-11-28 08:29:58.083317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.996 [2024-11-28 08:29:58.083374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.996 [2024-11-28 08:29:58.083387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.996 [2024-11-28 08:29:58.083394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.996 [2024-11-28 08:29:58.083400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.996 [2024-11-28 08:29:58.083414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.996 qpair failed and we were unable to recover it. 00:31:00.996 [2024-11-28 08:29:58.093376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.996 [2024-11-28 08:29:58.093426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.996 [2024-11-28 08:29:58.093439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.996 [2024-11-28 08:29:58.093446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.996 [2024-11-28 08:29:58.093452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.996 [2024-11-28 08:29:58.093466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.996 qpair failed and we were unable to recover it. 
00:31:00.996 [2024-11-28 08:29:58.103364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.996 [2024-11-28 08:29:58.103413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.996 [2024-11-28 08:29:58.103429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.996 [2024-11-28 08:29:58.103437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.996 [2024-11-28 08:29:58.103443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.996 [2024-11-28 08:29:58.103457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.996 qpair failed and we were unable to recover it. 00:31:00.996 [2024-11-28 08:29:58.113445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.996 [2024-11-28 08:29:58.113547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.996 [2024-11-28 08:29:58.113560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.996 [2024-11-28 08:29:58.113567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.996 [2024-11-28 08:29:58.113574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.996 [2024-11-28 08:29:58.113587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.996 qpair failed and we were unable to recover it. 00:31:00.996 [2024-11-28 08:29:58.123440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.996 [2024-11-28 08:29:58.123495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.996 [2024-11-28 08:29:58.123508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.996 [2024-11-28 08:29:58.123515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.996 [2024-11-28 08:29:58.123521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.996 [2024-11-28 08:29:58.123535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.996 qpair failed and we were unable to recover it. 
00:31:00.996 [2024-11-28 08:29:58.133492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.996 [2024-11-28 08:29:58.133538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.996 [2024-11-28 08:29:58.133551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.996 [2024-11-28 08:29:58.133558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.996 [2024-11-28 08:29:58.133565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.996 [2024-11-28 08:29:58.133579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.996 qpair failed and we were unable to recover it. 00:31:00.996 [2024-11-28 08:29:58.143476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.996 [2024-11-28 08:29:58.143523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.996 [2024-11-28 08:29:58.143536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.996 [2024-11-28 08:29:58.143543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.996 [2024-11-28 08:29:58.143553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.996 [2024-11-28 08:29:58.143567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.996 qpair failed and we were unable to recover it. 00:31:00.996 [2024-11-28 08:29:58.153566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.996 [2024-11-28 08:29:58.153617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.996 [2024-11-28 08:29:58.153630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.996 [2024-11-28 08:29:58.153637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.996 [2024-11-28 08:29:58.153643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.996 [2024-11-28 08:29:58.153657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.996 qpair failed and we were unable to recover it. 
00:31:00.996 [2024-11-28 08:29:58.163417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.996 [2024-11-28 08:29:58.163468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.996 [2024-11-28 08:29:58.163481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.996 [2024-11-28 08:29:58.163488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.996 [2024-11-28 08:29:58.163494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.996 [2024-11-28 08:29:58.163508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.996 qpair failed and we were unable to recover it. 00:31:00.996 [2024-11-28 08:29:58.173585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.996 [2024-11-28 08:29:58.173635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.996 [2024-11-28 08:29:58.173648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.996 [2024-11-28 08:29:58.173655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.996 [2024-11-28 08:29:58.173661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.996 [2024-11-28 08:29:58.173675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.996 qpair failed and we were unable to recover it. 00:31:00.996 [2024-11-28 08:29:58.183581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.996 [2024-11-28 08:29:58.183638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.996 [2024-11-28 08:29:58.183652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.996 [2024-11-28 08:29:58.183659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.996 [2024-11-28 08:29:58.183665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.996 [2024-11-28 08:29:58.183679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.996 qpair failed and we were unable to recover it. 
00:31:00.996 [2024-11-28 08:29:58.193525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.996 [2024-11-28 08:29:58.193581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.996 [2024-11-28 08:29:58.193595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.996 [2024-11-28 08:29:58.193602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.996 [2024-11-28 08:29:58.193608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.997 [2024-11-28 08:29:58.193622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.997 qpair failed and we were unable to recover it. 00:31:00.997 [2024-11-28 08:29:58.203613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.997 [2024-11-28 08:29:58.203670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.997 [2024-11-28 08:29:58.203683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.997 [2024-11-28 08:29:58.203689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.997 [2024-11-28 08:29:58.203696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.997 [2024-11-28 08:29:58.203710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.997 qpair failed and we were unable to recover it. 00:31:00.997 [2024-11-28 08:29:58.213648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.997 [2024-11-28 08:29:58.213694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.997 [2024-11-28 08:29:58.213707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.997 [2024-11-28 08:29:58.213714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.997 [2024-11-28 08:29:58.213721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.997 [2024-11-28 08:29:58.213734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.997 qpair failed and we were unable to recover it. 
00:31:00.997 [2024-11-28 08:29:58.223664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.997 [2024-11-28 08:29:58.223706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.997 [2024-11-28 08:29:58.223719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.997 [2024-11-28 08:29:58.223726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.997 [2024-11-28 08:29:58.223732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.997 [2024-11-28 08:29:58.223746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.997 qpair failed and we were unable to recover it. 00:31:00.997 [2024-11-28 08:29:58.233794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.997 [2024-11-28 08:29:58.233848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.997 [2024-11-28 08:29:58.233864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.997 [2024-11-28 08:29:58.233871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.997 [2024-11-28 08:29:58.233877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.997 [2024-11-28 08:29:58.233891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.997 qpair failed and we were unable to recover it. 00:31:00.997 [2024-11-28 08:29:58.243777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.997 [2024-11-28 08:29:58.243827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.997 [2024-11-28 08:29:58.243840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.997 [2024-11-28 08:29:58.243847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.997 [2024-11-28 08:29:58.243853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.997 [2024-11-28 08:29:58.243867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.997 qpair failed and we were unable to recover it. 
00:31:00.997 [2024-11-28 08:29:58.253771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.997 [2024-11-28 08:29:58.253835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.997 [2024-11-28 08:29:58.253848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.997 [2024-11-28 08:29:58.253855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.997 [2024-11-28 08:29:58.253862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.997 [2024-11-28 08:29:58.253876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.997 qpair failed and we were unable to recover it. 00:31:00.997 [2024-11-28 08:29:58.263792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.997 [2024-11-28 08:29:58.263838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.997 [2024-11-28 08:29:58.263851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.997 [2024-11-28 08:29:58.263858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.997 [2024-11-28 08:29:58.263865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.997 [2024-11-28 08:29:58.263878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.997 qpair failed and we were unable to recover it. 00:31:00.997 [2024-11-28 08:29:58.273860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.997 [2024-11-28 08:29:58.273938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.997 [2024-11-28 08:29:58.273951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.997 [2024-11-28 08:29:58.273958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.997 [2024-11-28 08:29:58.273968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:00.997 [2024-11-28 08:29:58.273982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:00.997 qpair failed and we were unable to recover it. 
00:31:01.261 [2024-11-28 08:29:58.283865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.261 [2024-11-28 08:29:58.283918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.261 [2024-11-28 08:29:58.283943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.261 [2024-11-28 08:29:58.283952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.261 [2024-11-28 08:29:58.283959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:01.261 [2024-11-28 08:29:58.283979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:01.261 qpair failed and we were unable to recover it. 00:31:01.261 [2024-11-28 08:29:58.293875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.261 [2024-11-28 08:29:58.293926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.261 [2024-11-28 08:29:58.293951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.261 [2024-11-28 08:29:58.293959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.261 [2024-11-28 08:29:58.293966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:01.261 [2024-11-28 08:29:58.293985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:01.261 qpair failed and we were unable to recover it. 00:31:01.261 [2024-11-28 08:29:58.303779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.261 [2024-11-28 08:29:58.303827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.261 [2024-11-28 08:29:58.303842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.261 [2024-11-28 08:29:58.303849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.261 [2024-11-28 08:29:58.303856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:01.261 [2024-11-28 08:29:58.303870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:01.261 qpair failed and we were unable to recover it. 
00:31:01.261 [2024-11-28 08:29:58.313970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.261 [2024-11-28 08:29:58.314022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.261 [2024-11-28 08:29:58.314036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.261 [2024-11-28 08:29:58.314043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.261 [2024-11-28 08:29:58.314050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:01.261 [2024-11-28 08:29:58.314064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:01.261 qpair failed and we were unable to recover it. 00:31:01.261 [2024-11-28 08:29:58.324005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.261 [2024-11-28 08:29:58.324057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.261 [2024-11-28 08:29:58.324071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.261 [2024-11-28 08:29:58.324078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.261 [2024-11-28 08:29:58.324084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:01.261 [2024-11-28 08:29:58.324098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:01.261 qpair failed and we were unable to recover it. 00:31:01.261 [2024-11-28 08:29:58.333981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.261 [2024-11-28 08:29:58.334027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.261 [2024-11-28 08:29:58.334041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.261 [2024-11-28 08:29:58.334048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.261 [2024-11-28 08:29:58.334054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:01.261 [2024-11-28 08:29:58.334069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:01.261 qpair failed and we were unable to recover it. 
00:31:01.261 [2024-11-28 08:29:58.344026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.261 [2024-11-28 08:29:58.344076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.261 [2024-11-28 08:29:58.344090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.261 [2024-11-28 08:29:58.344097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.261 [2024-11-28 08:29:58.344104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:01.261 [2024-11-28 08:29:58.344118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:01.261 qpair failed and we were unable to recover it. 00:31:01.261 [2024-11-28 08:29:58.354088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.261 [2024-11-28 08:29:58.354139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.261 [2024-11-28 08:29:58.354152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.261 [2024-11-28 08:29:58.354164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.261 [2024-11-28 08:29:58.354171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:01.261 [2024-11-28 08:29:58.354185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:01.261 qpair failed and we were unable to recover it. 00:31:01.261 [2024-11-28 08:29:58.364059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.261 [2024-11-28 08:29:58.364106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.261 [2024-11-28 08:29:58.364123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.261 [2024-11-28 08:29:58.364130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.261 [2024-11-28 08:29:58.364137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:01.261 [2024-11-28 08:29:58.364151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:01.261 qpair failed and we were unable to recover it. 
00:31:01.261 [2024-11-28 08:29:58.374093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.261 [2024-11-28 08:29:58.374141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.261 [2024-11-28 08:29:58.374154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.261 [2024-11-28 08:29:58.374164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.261 [2024-11-28 08:29:58.374171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:01.261 [2024-11-28 08:29:58.374185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:01.261 qpair failed and we were unable to recover it. 00:31:01.261 [2024-11-28 08:29:58.384167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.261 [2024-11-28 08:29:58.384255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.261 [2024-11-28 08:29:58.384269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.262 [2024-11-28 08:29:58.384275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.262 [2024-11-28 08:29:58.384282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:01.262 [2024-11-28 08:29:58.384296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:01.262 qpair failed and we were unable to recover it. 00:31:01.262 [2024-11-28 08:29:58.394178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.262 [2024-11-28 08:29:58.394230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.262 [2024-11-28 08:29:58.394244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.262 [2024-11-28 08:29:58.394251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.262 [2024-11-28 08:29:58.394257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:01.262 [2024-11-28 08:29:58.394271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:01.262 qpair failed and we were unable to recover it. 
00:31:01.262 [2024-11-28 08:29:58.404198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.262 [2024-11-28 08:29:58.404250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.262 [2024-11-28 08:29:58.404263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.262 [2024-11-28 08:29:58.404273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.262 [2024-11-28 08:29:58.404280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:01.262 [2024-11-28 08:29:58.404294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:01.262 qpair failed and we were unable to recover it. 00:31:01.262 [2024-11-28 08:29:58.414190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.262 [2024-11-28 08:29:58.414279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.262 [2024-11-28 08:29:58.414292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.262 [2024-11-28 08:29:58.414299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.262 [2024-11-28 08:29:58.414305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:01.262 [2024-11-28 08:29:58.414319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:01.262 qpair failed and we were unable to recover it. 00:31:01.262 [2024-11-28 08:29:58.424191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.262 [2024-11-28 08:29:58.424240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.262 [2024-11-28 08:29:58.424253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.262 [2024-11-28 08:29:58.424260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.262 [2024-11-28 08:29:58.424266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:01.262 [2024-11-28 08:29:58.424280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:01.262 qpair failed and we were unable to recover it. 
00:31:01.262 [2024-11-28 08:29:58.434264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.262 [2024-11-28 08:29:58.434315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.262 [2024-11-28 08:29:58.434328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.262 [2024-11-28 08:29:58.434335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.262 [2024-11-28 08:29:58.434341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:01.262 [2024-11-28 08:29:58.434355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:01.262 qpair failed and we were unable to recover it. 00:31:01.262 [2024-11-28 08:29:58.444264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.262 [2024-11-28 08:29:58.444333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.262 [2024-11-28 08:29:58.444346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.262 [2024-11-28 08:29:58.444353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.262 [2024-11-28 08:29:58.444359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:01.262 [2024-11-28 08:29:58.444377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:01.262 qpair failed and we were unable to recover it. 00:31:01.262 [2024-11-28 08:29:58.454279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.262 [2024-11-28 08:29:58.454326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.262 [2024-11-28 08:29:58.454339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.262 [2024-11-28 08:29:58.454345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.262 [2024-11-28 08:29:58.454352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:01.262 [2024-11-28 08:29:58.454365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:01.262 qpair failed and we were unable to recover it. 
00:31:01.262 [2024-11-28 08:29:58.464303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.262 [2024-11-28 08:29:58.464348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.262 [2024-11-28 08:29:58.464361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.262 [2024-11-28 08:29:58.464368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.262 [2024-11-28 08:29:58.464374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:01.262 [2024-11-28 08:29:58.464388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:01.262 qpair failed and we were unable to recover it. 00:31:01.262 [2024-11-28 08:29:58.474397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.262 [2024-11-28 08:29:58.474448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.262 [2024-11-28 08:29:58.474461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.262 [2024-11-28 08:29:58.474468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.262 [2024-11-28 08:29:58.474474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:01.262 [2024-11-28 08:29:58.474488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:01.262 qpair failed and we were unable to recover it. 00:31:01.262 [2024-11-28 08:29:58.484391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.262 [2024-11-28 08:29:58.484453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.262 [2024-11-28 08:29:58.484466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.262 [2024-11-28 08:29:58.484473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.262 [2024-11-28 08:29:58.484479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:01.262 [2024-11-28 08:29:58.484493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:01.262 qpair failed and we were unable to recover it. 
00:31:01.262 [2024-11-28 08:29:58.494452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.262 [2024-11-28 08:29:58.494507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.262 [2024-11-28 08:29:58.494520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.262 [2024-11-28 08:29:58.494527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.262 [2024-11-28 08:29:58.494533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.262 [2024-11-28 08:29:58.494547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.262 qpair failed and we were unable to recover it.
00:31:01.262 [2024-11-28 08:29:58.504447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.262 [2024-11-28 08:29:58.504498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.262 [2024-11-28 08:29:58.504511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.262 [2024-11-28 08:29:58.504518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.262 [2024-11-28 08:29:58.504524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.262 [2024-11-28 08:29:58.504537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.262 qpair failed and we were unable to recover it.
00:31:01.262 [2024-11-28 08:29:58.514503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.262 [2024-11-28 08:29:58.514597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.262 [2024-11-28 08:29:58.514610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.262 [2024-11-28 08:29:58.514617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.263 [2024-11-28 08:29:58.514623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.263 [2024-11-28 08:29:58.514637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.263 qpair failed and we were unable to recover it.
00:31:01.263 [2024-11-28 08:29:58.524522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.263 [2024-11-28 08:29:58.524637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.263 [2024-11-28 08:29:58.524650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.263 [2024-11-28 08:29:58.524657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.263 [2024-11-28 08:29:58.524664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.263 [2024-11-28 08:29:58.524677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.263 qpair failed and we were unable to recover it.
00:31:01.263 [2024-11-28 08:29:58.534550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.263 [2024-11-28 08:29:58.534644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.263 [2024-11-28 08:29:58.534657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.263 [2024-11-28 08:29:58.534667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.263 [2024-11-28 08:29:58.534673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.263 [2024-11-28 08:29:58.534687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.263 qpair failed and we were unable to recover it.
00:31:01.263 [2024-11-28 08:29:58.544521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.263 [2024-11-28 08:29:58.544569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.263 [2024-11-28 08:29:58.544582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.263 [2024-11-28 08:29:58.544589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.263 [2024-11-28 08:29:58.544595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.263 [2024-11-28 08:29:58.544609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.263 qpair failed and we were unable to recover it.
00:31:01.525 [2024-11-28 08:29:58.554625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.526 [2024-11-28 08:29:58.554678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.526 [2024-11-28 08:29:58.554691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.526 [2024-11-28 08:29:58.554698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.526 [2024-11-28 08:29:58.554704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.526 [2024-11-28 08:29:58.554719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.526 qpair failed and we were unable to recover it.
00:31:01.526 [2024-11-28 08:29:58.564684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.526 [2024-11-28 08:29:58.564735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.526 [2024-11-28 08:29:58.564748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.526 [2024-11-28 08:29:58.564755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.526 [2024-11-28 08:29:58.564761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.526 [2024-11-28 08:29:58.564775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.526 qpair failed and we were unable to recover it.
00:31:01.526 [2024-11-28 08:29:58.574628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.526 [2024-11-28 08:29:58.574687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.526 [2024-11-28 08:29:58.574700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.526 [2024-11-28 08:29:58.574707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.526 [2024-11-28 08:29:58.574713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.526 [2024-11-28 08:29:58.574735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.526 qpair failed and we were unable to recover it.
00:31:01.526 [2024-11-28 08:29:58.584669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.526 [2024-11-28 08:29:58.584712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.526 [2024-11-28 08:29:58.584726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.526 [2024-11-28 08:29:58.584733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.526 [2024-11-28 08:29:58.584739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.526 [2024-11-28 08:29:58.584754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.526 qpair failed and we were unable to recover it.
00:31:01.526 [2024-11-28 08:29:58.594764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.526 [2024-11-28 08:29:58.594820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.526 [2024-11-28 08:29:58.594833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.526 [2024-11-28 08:29:58.594840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.526 [2024-11-28 08:29:58.594846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.526 [2024-11-28 08:29:58.594860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.526 qpair failed and we were unable to recover it.
00:31:01.526 [2024-11-28 08:29:58.604737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.526 [2024-11-28 08:29:58.604787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.526 [2024-11-28 08:29:58.604800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.526 [2024-11-28 08:29:58.604807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.526 [2024-11-28 08:29:58.604813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.526 [2024-11-28 08:29:58.604827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.526 qpair failed and we were unable to recover it.
00:31:01.526 [2024-11-28 08:29:58.614725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.526 [2024-11-28 08:29:58.614769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.526 [2024-11-28 08:29:58.614782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.526 [2024-11-28 08:29:58.614789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.526 [2024-11-28 08:29:58.614795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.526 [2024-11-28 08:29:58.614810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.526 qpair failed and we were unable to recover it.
00:31:01.526 [2024-11-28 08:29:58.624735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.526 [2024-11-28 08:29:58.624780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.526 [2024-11-28 08:29:58.624793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.526 [2024-11-28 08:29:58.624801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.526 [2024-11-28 08:29:58.624807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.526 [2024-11-28 08:29:58.624821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.526 qpair failed and we were unable to recover it.
00:31:01.526 [2024-11-28 08:29:58.634787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.526 [2024-11-28 08:29:58.634838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.526 [2024-11-28 08:29:58.634853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.526 [2024-11-28 08:29:58.634860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.526 [2024-11-28 08:29:58.634867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.526 [2024-11-28 08:29:58.634885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.526 qpair failed and we were unable to recover it.
00:31:01.526 [2024-11-28 08:29:58.644783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.526 [2024-11-28 08:29:58.644839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.526 [2024-11-28 08:29:58.644864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.526 [2024-11-28 08:29:58.644872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.526 [2024-11-28 08:29:58.644879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.526 [2024-11-28 08:29:58.644899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.526 qpair failed and we were unable to recover it.
00:31:01.526 [2024-11-28 08:29:58.654808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.526 [2024-11-28 08:29:58.654860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.526 [2024-11-28 08:29:58.654885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.526 [2024-11-28 08:29:58.654893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.526 [2024-11-28 08:29:58.654900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.526 [2024-11-28 08:29:58.654920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.526 qpair failed and we were unable to recover it.
00:31:01.526 [2024-11-28 08:29:58.664855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.526 [2024-11-28 08:29:58.664901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.526 [2024-11-28 08:29:58.664920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.526 [2024-11-28 08:29:58.664928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.526 [2024-11-28 08:29:58.664934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.526 [2024-11-28 08:29:58.664950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.526 qpair failed and we were unable to recover it.
00:31:01.526 [2024-11-28 08:29:58.674899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.526 [2024-11-28 08:29:58.674951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.526 [2024-11-28 08:29:58.674975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.526 [2024-11-28 08:29:58.674984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.526 [2024-11-28 08:29:58.674991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.526 [2024-11-28 08:29:58.675011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.527 qpair failed and we were unable to recover it.
00:31:01.527 [2024-11-28 08:29:58.684923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.527 [2024-11-28 08:29:58.684978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.527 [2024-11-28 08:29:58.685002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.527 [2024-11-28 08:29:58.685011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.527 [2024-11-28 08:29:58.685019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.527 [2024-11-28 08:29:58.685038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.527 qpair failed and we were unable to recover it.
00:31:01.527 [2024-11-28 08:29:58.694941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.527 [2024-11-28 08:29:58.694987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.527 [2024-11-28 08:29:58.695003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.527 [2024-11-28 08:29:58.695010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.527 [2024-11-28 08:29:58.695016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.527 [2024-11-28 08:29:58.695032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.527 qpair failed and we were unable to recover it.
00:31:01.527 [2024-11-28 08:29:58.704964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.527 [2024-11-28 08:29:58.705019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.527 [2024-11-28 08:29:58.705033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.527 [2024-11-28 08:29:58.705040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.527 [2024-11-28 08:29:58.705050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.527 [2024-11-28 08:29:58.705066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.527 qpair failed and we were unable to recover it.
00:31:01.527 [2024-11-28 08:29:58.714988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.527 [2024-11-28 08:29:58.715037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.527 [2024-11-28 08:29:58.715050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.527 [2024-11-28 08:29:58.715057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.527 [2024-11-28 08:29:58.715063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.527 [2024-11-28 08:29:58.715077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.527 qpair failed and we were unable to recover it.
00:31:01.527 [2024-11-28 08:29:58.725046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.527 [2024-11-28 08:29:58.725091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.527 [2024-11-28 08:29:58.725105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.527 [2024-11-28 08:29:58.725111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.527 [2024-11-28 08:29:58.725118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.527 [2024-11-28 08:29:58.725131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.527 qpair failed and we were unable to recover it.
00:31:01.527 [2024-11-28 08:29:58.735018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.527 [2024-11-28 08:29:58.735063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.527 [2024-11-28 08:29:58.735077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.527 [2024-11-28 08:29:58.735084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.527 [2024-11-28 08:29:58.735090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.527 [2024-11-28 08:29:58.735105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.527 qpair failed and we were unable to recover it.
00:31:01.527 [2024-11-28 08:29:58.745065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.527 [2024-11-28 08:29:58.745129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.527 [2024-11-28 08:29:58.745142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.527 [2024-11-28 08:29:58.745149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.527 [2024-11-28 08:29:58.745155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.527 [2024-11-28 08:29:58.745175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.527 qpair failed and we were unable to recover it.
00:31:01.527 [2024-11-28 08:29:58.755116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.527 [2024-11-28 08:29:58.755172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.527 [2024-11-28 08:29:58.755186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.527 [2024-11-28 08:29:58.755193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.527 [2024-11-28 08:29:58.755199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.527 [2024-11-28 08:29:58.755214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.527 qpair failed and we were unable to recover it.
00:31:01.527 [2024-11-28 08:29:58.765153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.527 [2024-11-28 08:29:58.765203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.527 [2024-11-28 08:29:58.765216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.527 [2024-11-28 08:29:58.765224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.527 [2024-11-28 08:29:58.765230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.527 [2024-11-28 08:29:58.765244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.527 qpair failed and we were unable to recover it.
00:31:01.527 [2024-11-28 08:29:58.775156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.527 [2024-11-28 08:29:58.775212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.527 [2024-11-28 08:29:58.775225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.527 [2024-11-28 08:29:58.775232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.527 [2024-11-28 08:29:58.775239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.527 [2024-11-28 08:29:58.775253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.527 qpair failed and we were unable to recover it.
00:31:01.527 [2024-11-28 08:29:58.785193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.527 [2024-11-28 08:29:58.785276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.527 [2024-11-28 08:29:58.785289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.527 [2024-11-28 08:29:58.785296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.527 [2024-11-28 08:29:58.785302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.527 [2024-11-28 08:29:58.785316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.527 qpair failed and we were unable to recover it.
00:31:01.527 [2024-11-28 08:29:58.795192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.527 [2024-11-28 08:29:58.795238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.527 [2024-11-28 08:29:58.795254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.527 [2024-11-28 08:29:58.795261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.527 [2024-11-28 08:29:58.795267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.527 [2024-11-28 08:29:58.795282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.527 qpair failed and we were unable to recover it.
00:31:01.527 [2024-11-28 08:29:58.805259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.527 [2024-11-28 08:29:58.805313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.527 [2024-11-28 08:29:58.805326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.527 [2024-11-28 08:29:58.805333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.527 [2024-11-28 08:29:58.805340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.527 [2024-11-28 08:29:58.805353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.528 qpair failed and we were unable to recover it.
00:31:01.791 [2024-11-28 08:29:58.815265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.791 [2024-11-28 08:29:58.815316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.791 [2024-11-28 08:29:58.815329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.791 [2024-11-28 08:29:58.815336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.791 [2024-11-28 08:29:58.815342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.791 [2024-11-28 08:29:58.815356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.791 qpair failed and we were unable to recover it.
00:31:01.791 [2024-11-28 08:29:58.825265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.791 [2024-11-28 08:29:58.825315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.791 [2024-11-28 08:29:58.825327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.791 [2024-11-28 08:29:58.825335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.791 [2024-11-28 08:29:58.825341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.791 [2024-11-28 08:29:58.825355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.791 qpair failed and we were unable to recover it.
00:31:01.791 [2024-11-28 08:29:58.835336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.791 [2024-11-28 08:29:58.835381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.791 [2024-11-28 08:29:58.835394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.791 [2024-11-28 08:29:58.835401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.791 [2024-11-28 08:29:58.835411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.791 [2024-11-28 08:29:58.835425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.791 qpair failed and we were unable to recover it.
00:31:01.791 [2024-11-28 08:29:58.845377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.791 [2024-11-28 08:29:58.845477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.791 [2024-11-28 08:29:58.845490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.791 [2024-11-28 08:29:58.845497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.791 [2024-11-28 08:29:58.845503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.791 [2024-11-28 08:29:58.845517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.791 qpair failed and we were unable to recover it.
00:31:01.791 [2024-11-28 08:29:58.855345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.791 [2024-11-28 08:29:58.855385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.792 [2024-11-28 08:29:58.855399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.792 [2024-11-28 08:29:58.855406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.792 [2024-11-28 08:29:58.855412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.792 [2024-11-28 08:29:58.855426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.792 qpair failed and we were unable to recover it.
00:31:01.792 [2024-11-28 08:29:58.865405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.792 [2024-11-28 08:29:58.865448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.792 [2024-11-28 08:29:58.865461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.792 [2024-11-28 08:29:58.865468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.792 [2024-11-28 08:29:58.865474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.792 [2024-11-28 08:29:58.865488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.792 qpair failed and we were unable to recover it.
00:31:01.792 [2024-11-28 08:29:58.875425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.792 [2024-11-28 08:29:58.875484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.792 [2024-11-28 08:29:58.875498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.792 [2024-11-28 08:29:58.875504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.792 [2024-11-28 08:29:58.875511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.792 [2024-11-28 08:29:58.875524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.792 qpair failed and we were unable to recover it.
00:31:01.792 [2024-11-28 08:29:58.885464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.792 [2024-11-28 08:29:58.885512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.792 [2024-11-28 08:29:58.885526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.792 [2024-11-28 08:29:58.885532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.792 [2024-11-28 08:29:58.885539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.792 [2024-11-28 08:29:58.885553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.792 qpair failed and we were unable to recover it.
00:31:01.792 [2024-11-28 08:29:58.895485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.792 [2024-11-28 08:29:58.895527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.792 [2024-11-28 08:29:58.895540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.792 [2024-11-28 08:29:58.895547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.792 [2024-11-28 08:29:58.895553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.792 [2024-11-28 08:29:58.895567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.792 qpair failed and we were unable to recover it.
00:31:01.792 [2024-11-28 08:29:58.905497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.792 [2024-11-28 08:29:58.905548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.792 [2024-11-28 08:29:58.905562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.792 [2024-11-28 08:29:58.905569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.792 [2024-11-28 08:29:58.905575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.792 [2024-11-28 08:29:58.905589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.792 qpair failed and we were unable to recover it.
00:31:01.792 [2024-11-28 08:29:58.915519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.792 [2024-11-28 08:29:58.915566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.792 [2024-11-28 08:29:58.915580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.792 [2024-11-28 08:29:58.915587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.792 [2024-11-28 08:29:58.915593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.792 [2024-11-28 08:29:58.915608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.792 qpair failed and we were unable to recover it.
00:31:01.792 [2024-11-28 08:29:58.925628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.792 [2024-11-28 08:29:58.925676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.792 [2024-11-28 08:29:58.925692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.792 [2024-11-28 08:29:58.925700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.792 [2024-11-28 08:29:58.925706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.792 [2024-11-28 08:29:58.925720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.792 qpair failed and we were unable to recover it.
00:31:01.792 [2024-11-28 08:29:58.935578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.792 [2024-11-28 08:29:58.935622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.792 [2024-11-28 08:29:58.935635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.792 [2024-11-28 08:29:58.935643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.792 [2024-11-28 08:29:58.935649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.792 [2024-11-28 08:29:58.935663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.792 qpair failed and we were unable to recover it.
00:31:01.792 [2024-11-28 08:29:58.945601] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.792 [2024-11-28 08:29:58.945647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.792 [2024-11-28 08:29:58.945660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.792 [2024-11-28 08:29:58.945667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.792 [2024-11-28 08:29:58.945674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.792 [2024-11-28 08:29:58.945688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.792 qpair failed and we were unable to recover it.
00:31:01.792 [2024-11-28 08:29:58.955662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.792 [2024-11-28 08:29:58.955714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.792 [2024-11-28 08:29:58.955728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.792 [2024-11-28 08:29:58.955734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.792 [2024-11-28 08:29:58.955741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.792 [2024-11-28 08:29:58.955754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.792 qpair failed and we were unable to recover it.
00:31:01.792 [2024-11-28 08:29:58.965655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.792 [2024-11-28 08:29:58.965742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.792 [2024-11-28 08:29:58.965755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.792 [2024-11-28 08:29:58.965765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.792 [2024-11-28 08:29:58.965772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.792 [2024-11-28 08:29:58.965786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.792 qpair failed and we were unable to recover it.
00:31:01.792 [2024-11-28 08:29:58.975683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.792 [2024-11-28 08:29:58.975733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.792 [2024-11-28 08:29:58.975746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.792 [2024-11-28 08:29:58.975753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.792 [2024-11-28 08:29:58.975759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.792 [2024-11-28 08:29:58.975773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.792 qpair failed and we were unable to recover it.
00:31:01.792 [2024-11-28 08:29:58.985757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.793 [2024-11-28 08:29:58.985804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.793 [2024-11-28 08:29:58.985817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.793 [2024-11-28 08:29:58.985824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.793 [2024-11-28 08:29:58.985830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.793 [2024-11-28 08:29:58.985844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.793 qpair failed and we were unable to recover it.
00:31:01.793 [2024-11-28 08:29:58.995737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.793 [2024-11-28 08:29:58.995780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.793 [2024-11-28 08:29:58.995793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.793 [2024-11-28 08:29:58.995800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.793 [2024-11-28 08:29:58.995806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.793 [2024-11-28 08:29:58.995820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.793 qpair failed and we were unable to recover it.
00:31:01.793 [2024-11-28 08:29:59.005775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.793 [2024-11-28 08:29:59.005828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.793 [2024-11-28 08:29:59.005841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.793 [2024-11-28 08:29:59.005848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.793 [2024-11-28 08:29:59.005854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.793 [2024-11-28 08:29:59.005872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.793 qpair failed and we were unable to recover it.
00:31:01.793 [2024-11-28 08:29:59.015748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.793 [2024-11-28 08:29:59.015801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.793 [2024-11-28 08:29:59.015826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.793 [2024-11-28 08:29:59.015835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.793 [2024-11-28 08:29:59.015841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.793 [2024-11-28 08:29:59.015861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.793 qpair failed and we were unable to recover it.
00:31:01.793 [2024-11-28 08:29:59.025815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.793 [2024-11-28 08:29:59.025868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.793 [2024-11-28 08:29:59.025894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.793 [2024-11-28 08:29:59.025902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.793 [2024-11-28 08:29:59.025909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.793 [2024-11-28 08:29:59.025929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.793 qpair failed and we were unable to recover it.
00:31:01.793 [2024-11-28 08:29:59.035817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.793 [2024-11-28 08:29:59.035867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.793 [2024-11-28 08:29:59.035883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.793 [2024-11-28 08:29:59.035890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.793 [2024-11-28 08:29:59.035896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.793 [2024-11-28 08:29:59.035911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.793 qpair failed and we were unable to recover it.
00:31:01.793 [2024-11-28 08:29:59.045853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.793 [2024-11-28 08:29:59.045907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.793 [2024-11-28 08:29:59.045932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.793 [2024-11-28 08:29:59.045940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.793 [2024-11-28 08:29:59.045948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.793 [2024-11-28 08:29:59.045967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.793 qpair failed and we were unable to recover it.
00:31:01.793 [2024-11-28 08:29:59.055895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.793 [2024-11-28 08:29:59.055951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.793 [2024-11-28 08:29:59.055977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.793 [2024-11-28 08:29:59.055986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.793 [2024-11-28 08:29:59.055993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90
00:31:01.793 [2024-11-28 08:29:59.056012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:31:01.793 qpair failed and we were unable to recover it.
00:31:01.793 [2024-11-28 08:29:59.065940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.793 [2024-11-28 08:29:59.066033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.793 [2024-11-28 08:29:59.066058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.793 [2024-11-28 08:29:59.066067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.793 [2024-11-28 08:29:59.066074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:01.793 [2024-11-28 08:29:59.066094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:01.793 qpair failed and we were unable to recover it. 00:31:01.793 [2024-11-28 08:29:59.075954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.793 [2024-11-28 08:29:59.076002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.793 [2024-11-28 08:29:59.076018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.793 [2024-11-28 08:29:59.076026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.793 [2024-11-28 08:29:59.076032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:01.793 [2024-11-28 08:29:59.076052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:01.793 qpair failed and we were unable to recover it. 00:31:02.056 [2024-11-28 08:29:59.085989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.056 [2024-11-28 08:29:59.086033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.056 [2024-11-28 08:29:59.086048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.056 [2024-11-28 08:29:59.086056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.056 [2024-11-28 08:29:59.086062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.057 [2024-11-28 08:29:59.086077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.057 qpair failed and we were unable to recover it. 
00:31:02.057 [2024-11-28 08:29:59.095874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.057 [2024-11-28 08:29:59.095960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.057 [2024-11-28 08:29:59.095974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.057 [2024-11-28 08:29:59.095986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.057 [2024-11-28 08:29:59.095992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.057 [2024-11-28 08:29:59.096007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.057 qpair failed and we were unable to recover it. 00:31:02.057 [2024-11-28 08:29:59.105999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.057 [2024-11-28 08:29:59.106050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.057 [2024-11-28 08:29:59.106064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.057 [2024-11-28 08:29:59.106071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.057 [2024-11-28 08:29:59.106077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.057 [2024-11-28 08:29:59.106091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.057 qpair failed and we were unable to recover it. 00:31:02.057 [2024-11-28 08:29:59.116078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.057 [2024-11-28 08:29:59.116122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.057 [2024-11-28 08:29:59.116135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.057 [2024-11-28 08:29:59.116142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.057 [2024-11-28 08:29:59.116149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.057 [2024-11-28 08:29:59.116167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.057 qpair failed and we were unable to recover it. 
00:31:02.057 [2024-11-28 08:29:59.126102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.057 [2024-11-28 08:29:59.126193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.057 [2024-11-28 08:29:59.126206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.057 [2024-11-28 08:29:59.126213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.057 [2024-11-28 08:29:59.126220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.057 [2024-11-28 08:29:59.126234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.057 qpair failed and we were unable to recover it. 00:31:02.057 [2024-11-28 08:29:59.136014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.057 [2024-11-28 08:29:59.136060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.057 [2024-11-28 08:29:59.136073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.057 [2024-11-28 08:29:59.136080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.057 [2024-11-28 08:29:59.136087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.057 [2024-11-28 08:29:59.136104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.057 qpair failed and we were unable to recover it. 00:31:02.057 [2024-11-28 08:29:59.146109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.057 [2024-11-28 08:29:59.146154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.057 [2024-11-28 08:29:59.146171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.057 [2024-11-28 08:29:59.146178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.057 [2024-11-28 08:29:59.146185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.057 [2024-11-28 08:29:59.146199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.057 qpair failed and we were unable to recover it. 
00:31:02.057 [2024-11-28 08:29:59.156174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.057 [2024-11-28 08:29:59.156232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.057 [2024-11-28 08:29:59.156245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.057 [2024-11-28 08:29:59.156252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.057 [2024-11-28 08:29:59.156258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.057 [2024-11-28 08:29:59.156273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.057 qpair failed and we were unable to recover it. 00:31:02.057 [2024-11-28 08:29:59.166212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.057 [2024-11-28 08:29:59.166294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.057 [2024-11-28 08:29:59.166307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.057 [2024-11-28 08:29:59.166315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.057 [2024-11-28 08:29:59.166321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.057 [2024-11-28 08:29:59.166336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.057 qpair failed and we were unable to recover it. 00:31:02.057 [2024-11-28 08:29:59.176232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.057 [2024-11-28 08:29:59.176274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.057 [2024-11-28 08:29:59.176287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.057 [2024-11-28 08:29:59.176294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.057 [2024-11-28 08:29:59.176300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.057 [2024-11-28 08:29:59.176315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.057 qpair failed and we were unable to recover it. 
00:31:02.057 [2024-11-28 08:29:59.186263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.057 [2024-11-28 08:29:59.186307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.057 [2024-11-28 08:29:59.186320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.057 [2024-11-28 08:29:59.186327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.057 [2024-11-28 08:29:59.186334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.057 [2024-11-28 08:29:59.186348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.057 qpair failed and we were unable to recover it. 00:31:02.057 [2024-11-28 08:29:59.196285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.057 [2024-11-28 08:29:59.196333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.057 [2024-11-28 08:29:59.196346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.057 [2024-11-28 08:29:59.196353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.057 [2024-11-28 08:29:59.196359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.057 [2024-11-28 08:29:59.196373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.057 qpair failed and we were unable to recover it. 00:31:02.057 [2024-11-28 08:29:59.206309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.057 [2024-11-28 08:29:59.206354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.057 [2024-11-28 08:29:59.206368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.057 [2024-11-28 08:29:59.206375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.057 [2024-11-28 08:29:59.206381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.057 [2024-11-28 08:29:59.206395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.057 qpair failed and we were unable to recover it. 
00:31:02.057 [2024-11-28 08:29:59.216337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.057 [2024-11-28 08:29:59.216378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.057 [2024-11-28 08:29:59.216391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.058 [2024-11-28 08:29:59.216398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.058 [2024-11-28 08:29:59.216404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.058 [2024-11-28 08:29:59.216418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.058 qpair failed and we were unable to recover it. 00:31:02.058 [2024-11-28 08:29:59.226360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.058 [2024-11-28 08:29:59.226432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.058 [2024-11-28 08:29:59.226449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.058 [2024-11-28 08:29:59.226456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.058 [2024-11-28 08:29:59.226462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.058 [2024-11-28 08:29:59.226477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.058 qpair failed and we were unable to recover it. 00:31:02.058 [2024-11-28 08:29:59.236391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.058 [2024-11-28 08:29:59.236435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.058 [2024-11-28 08:29:59.236449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.058 [2024-11-28 08:29:59.236456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.058 [2024-11-28 08:29:59.236462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.058 [2024-11-28 08:29:59.236476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.058 qpair failed and we were unable to recover it. 
00:31:02.058 [2024-11-28 08:29:59.246285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.058 [2024-11-28 08:29:59.246342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.058 [2024-11-28 08:29:59.246355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.058 [2024-11-28 08:29:59.246362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.058 [2024-11-28 08:29:59.246368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.058 [2024-11-28 08:29:59.246382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.058 qpair failed and we were unable to recover it. 00:31:02.058 [2024-11-28 08:29:59.256450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.058 [2024-11-28 08:29:59.256495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.058 [2024-11-28 08:29:59.256508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.058 [2024-11-28 08:29:59.256515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.058 [2024-11-28 08:29:59.256521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.058 [2024-11-28 08:29:59.256535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.058 qpair failed and we were unable to recover it. 00:31:02.058 [2024-11-28 08:29:59.266473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.058 [2024-11-28 08:29:59.266528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.058 [2024-11-28 08:29:59.266541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.058 [2024-11-28 08:29:59.266548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.058 [2024-11-28 08:29:59.266561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.058 [2024-11-28 08:29:59.266576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.058 qpair failed and we were unable to recover it. 
00:31:02.058 [2024-11-28 08:29:59.276476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.058 [2024-11-28 08:29:59.276519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.058 [2024-11-28 08:29:59.276532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.058 [2024-11-28 08:29:59.276539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.058 [2024-11-28 08:29:59.276546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.058 [2024-11-28 08:29:59.276560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.058 qpair failed and we were unable to recover it. 00:31:02.058 [2024-11-28 08:29:59.286533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.058 [2024-11-28 08:29:59.286584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.058 [2024-11-28 08:29:59.286598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.058 [2024-11-28 08:29:59.286605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.058 [2024-11-28 08:29:59.286611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.058 [2024-11-28 08:29:59.286625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.058 qpair failed and we were unable to recover it. 00:31:02.058 [2024-11-28 08:29:59.296532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.058 [2024-11-28 08:29:59.296574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.058 [2024-11-28 08:29:59.296587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.058 [2024-11-28 08:29:59.296594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.058 [2024-11-28 08:29:59.296601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.058 [2024-11-28 08:29:59.296615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.058 qpair failed and we were unable to recover it. 
00:31:02.058 [2024-11-28 08:29:59.306547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.058 [2024-11-28 08:29:59.306587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.058 [2024-11-28 08:29:59.306600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.058 [2024-11-28 08:29:59.306607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.058 [2024-11-28 08:29:59.306613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.058 [2024-11-28 08:29:59.306627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.058 qpair failed and we were unable to recover it. 00:31:02.058 [2024-11-28 08:29:59.316645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.058 [2024-11-28 08:29:59.316693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.058 [2024-11-28 08:29:59.316706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.058 [2024-11-28 08:29:59.316712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.058 [2024-11-28 08:29:59.316719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.058 [2024-11-28 08:29:59.316732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.058 qpair failed and we were unable to recover it. 00:31:02.058 [2024-11-28 08:29:59.326631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.058 [2024-11-28 08:29:59.326682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.058 [2024-11-28 08:29:59.326695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.058 [2024-11-28 08:29:59.326702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.058 [2024-11-28 08:29:59.326708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.058 [2024-11-28 08:29:59.326722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.058 qpair failed and we were unable to recover it. 
00:31:02.058 [2024-11-28 08:29:59.336644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.058 [2024-11-28 08:29:59.336683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.058 [2024-11-28 08:29:59.336696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.058 [2024-11-28 08:29:59.336703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.058 [2024-11-28 08:29:59.336709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.058 [2024-11-28 08:29:59.336724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.058 qpair failed and we were unable to recover it. 00:31:02.322 [2024-11-28 08:29:59.346671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.322 [2024-11-28 08:29:59.346716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.322 [2024-11-28 08:29:59.346729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.322 [2024-11-28 08:29:59.346736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.322 [2024-11-28 08:29:59.346743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.322 [2024-11-28 08:29:59.346756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.322 qpair failed and we were unable to recover it. 00:31:02.322 [2024-11-28 08:29:59.356585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.322 [2024-11-28 08:29:59.356632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.322 [2024-11-28 08:29:59.356650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.322 [2024-11-28 08:29:59.356657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.322 [2024-11-28 08:29:59.356664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.322 [2024-11-28 08:29:59.356679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.322 qpair failed and we were unable to recover it. 
00:31:02.322 [2024-11-28 08:29:59.366724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.322 [2024-11-28 08:29:59.366772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.322 [2024-11-28 08:29:59.366786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.322 [2024-11-28 08:29:59.366793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.322 [2024-11-28 08:29:59.366799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.322 [2024-11-28 08:29:59.366813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.322 qpair failed and we were unable to recover it. 00:31:02.322 [2024-11-28 08:29:59.376717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.322 [2024-11-28 08:29:59.376758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.322 [2024-11-28 08:29:59.376772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.322 [2024-11-28 08:29:59.376779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.322 [2024-11-28 08:29:59.376785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.322 [2024-11-28 08:29:59.376799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.322 qpair failed and we were unable to recover it. 00:31:02.322 [2024-11-28 08:29:59.386716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.322 [2024-11-28 08:29:59.386825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.322 [2024-11-28 08:29:59.386838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.322 [2024-11-28 08:29:59.386845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.322 [2024-11-28 08:29:59.386852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.322 [2024-11-28 08:29:59.386865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.322 qpair failed and we were unable to recover it. 
00:31:02.322 [2024-11-28 08:29:59.396798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.322 [2024-11-28 08:29:59.396840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.322 [2024-11-28 08:29:59.396853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.322 [2024-11-28 08:29:59.396860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.322 [2024-11-28 08:29:59.396869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.322 [2024-11-28 08:29:59.396884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.322 qpair failed and we were unable to recover it. 00:31:02.322 [2024-11-28 08:29:59.406822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.322 [2024-11-28 08:29:59.406874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.322 [2024-11-28 08:29:59.406899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.322 [2024-11-28 08:29:59.406908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.322 [2024-11-28 08:29:59.406914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.322 [2024-11-28 08:29:59.406934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.322 qpair failed and we were unable to recover it. 00:31:02.322 [2024-11-28 08:29:59.416845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.322 [2024-11-28 08:29:59.416936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.322 [2024-11-28 08:29:59.416951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.322 [2024-11-28 08:29:59.416958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.322 [2024-11-28 08:29:59.416965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.322 [2024-11-28 08:29:59.416980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.322 qpair failed and we were unable to recover it. 
00:31:02.322 [2024-11-28 08:29:59.426846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.322 [2024-11-28 08:29:59.426899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.322 [2024-11-28 08:29:59.426925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.322 [2024-11-28 08:29:59.426934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.322 [2024-11-28 08:29:59.426940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.322 [2024-11-28 08:29:59.426960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.323 qpair failed and we were unable to recover it. 00:31:02.323 [2024-11-28 08:29:59.436925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.323 [2024-11-28 08:29:59.436985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.323 [2024-11-28 08:29:59.437000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.323 [2024-11-28 08:29:59.437007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.323 [2024-11-28 08:29:59.437014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.323 [2024-11-28 08:29:59.437029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.323 qpair failed and we were unable to recover it. 00:31:02.323 [2024-11-28 08:29:59.446924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.323 [2024-11-28 08:29:59.446976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.323 [2024-11-28 08:29:59.447001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.323 [2024-11-28 08:29:59.447010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.323 [2024-11-28 08:29:59.447017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.323 [2024-11-28 08:29:59.447037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.323 qpair failed and we were unable to recover it. 
00:31:02.323 [2024-11-28 08:29:59.456961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.323 [2024-11-28 08:29:59.457013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.323 [2024-11-28 08:29:59.457028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.323 [2024-11-28 08:29:59.457035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.323 [2024-11-28 08:29:59.457042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.323 [2024-11-28 08:29:59.457057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.323 qpair failed and we were unable to recover it. 00:31:02.323 [2024-11-28 08:29:59.466995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.323 [2024-11-28 08:29:59.467043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.323 [2024-11-28 08:29:59.467057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.323 [2024-11-28 08:29:59.467064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.323 [2024-11-28 08:29:59.467070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.323 [2024-11-28 08:29:59.467084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.323 qpair failed and we were unable to recover it. 00:31:02.323 [2024-11-28 08:29:59.477031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.323 [2024-11-28 08:29:59.477076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.323 [2024-11-28 08:29:59.477090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.323 [2024-11-28 08:29:59.477097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.323 [2024-11-28 08:29:59.477103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.323 [2024-11-28 08:29:59.477117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.323 qpair failed and we were unable to recover it. 
00:31:02.323 [2024-11-28 08:29:59.487063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.323 [2024-11-28 08:29:59.487132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.323 [2024-11-28 08:29:59.487150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.323 [2024-11-28 08:29:59.487157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.323 [2024-11-28 08:29:59.487168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.323 [2024-11-28 08:29:59.487183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.323 qpair failed and we were unable to recover it. 00:31:02.323 [2024-11-28 08:29:59.497055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.323 [2024-11-28 08:29:59.497095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.323 [2024-11-28 08:29:59.497109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.323 [2024-11-28 08:29:59.497116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.323 [2024-11-28 08:29:59.497122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.323 [2024-11-28 08:29:59.497136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.323 qpair failed and we were unable to recover it. 00:31:02.323 [2024-11-28 08:29:59.507095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.323 [2024-11-28 08:29:59.507138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.323 [2024-11-28 08:29:59.507152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.323 [2024-11-28 08:29:59.507162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.323 [2024-11-28 08:29:59.507169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.323 [2024-11-28 08:29:59.507183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.323 qpair failed and we were unable to recover it. 
00:31:02.323 [2024-11-28 08:29:59.517130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.323 [2024-11-28 08:29:59.517200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.323 [2024-11-28 08:29:59.517213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.323 [2024-11-28 08:29:59.517220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.323 [2024-11-28 08:29:59.517226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.323 [2024-11-28 08:29:59.517240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.323 qpair failed and we were unable to recover it. 00:31:02.323 [2024-11-28 08:29:59.527165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.323 [2024-11-28 08:29:59.527212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.323 [2024-11-28 08:29:59.527225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.323 [2024-11-28 08:29:59.527235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.323 [2024-11-28 08:29:59.527242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.323 [2024-11-28 08:29:59.527256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.323 qpair failed and we were unable to recover it. 00:31:02.323 [2024-11-28 08:29:59.537182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.323 [2024-11-28 08:29:59.537237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.323 [2024-11-28 08:29:59.537250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.323 [2024-11-28 08:29:59.537256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.323 [2024-11-28 08:29:59.537263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.323 [2024-11-28 08:29:59.537277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.323 qpair failed and we were unable to recover it. 
00:31:02.323 [2024-11-28 08:29:59.547195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.323 [2024-11-28 08:29:59.547237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.323 [2024-11-28 08:29:59.547250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.323 [2024-11-28 08:29:59.547257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.323 [2024-11-28 08:29:59.547263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.323 [2024-11-28 08:29:59.547278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.323 qpair failed and we were unable to recover it. 00:31:02.323 [2024-11-28 08:29:59.557215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.323 [2024-11-28 08:29:59.557264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.323 [2024-11-28 08:29:59.557277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.323 [2024-11-28 08:29:59.557284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.323 [2024-11-28 08:29:59.557291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.323 [2024-11-28 08:29:59.557305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.323 qpair failed and we were unable to recover it. 00:31:02.324 [2024-11-28 08:29:59.567238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.324 [2024-11-28 08:29:59.567285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.324 [2024-11-28 08:29:59.567299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.324 [2024-11-28 08:29:59.567306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.324 [2024-11-28 08:29:59.567312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.324 [2024-11-28 08:29:59.567330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.324 qpair failed and we were unable to recover it. 
00:31:02.324 [2024-11-28 08:29:59.577273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.324 [2024-11-28 08:29:59.577315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.324 [2024-11-28 08:29:59.577328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.324 [2024-11-28 08:29:59.577335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.324 [2024-11-28 08:29:59.577341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.324 [2024-11-28 08:29:59.577356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.324 qpair failed and we were unable to recover it. 00:31:02.324 [2024-11-28 08:29:59.587343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.324 [2024-11-28 08:29:59.587407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.324 [2024-11-28 08:29:59.587421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.324 [2024-11-28 08:29:59.587428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.324 [2024-11-28 08:29:59.587435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.324 [2024-11-28 08:29:59.587448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.324 qpair failed and we were unable to recover it. 00:31:02.324 [2024-11-28 08:29:59.597241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.324 [2024-11-28 08:29:59.597289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.324 [2024-11-28 08:29:59.597302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.324 [2024-11-28 08:29:59.597309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.324 [2024-11-28 08:29:59.597315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:02.324 [2024-11-28 08:29:59.597329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:02.324 qpair failed and we were unable to recover it. 
00:31:02.324 .. 00:31:03.119 [2024-11-28 08:29:59.607 .. 08:30:00.259] the same seven-line CONNECT-failure cycle repeats 66 more times at roughly 10 ms intervals, identical except for timestamps; every attempt ends "qpair failed and we were unable to recover it."
00:31:03.119 [2024-11-28 08:30:00.269137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:03.119 [2024-11-28 08:30:00.269189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:03.119 [2024-11-28 08:30:00.269205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:03.119 [2024-11-28 08:30:00.269212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:03.119 [2024-11-28 08:30:00.269218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:03.119 [2024-11-28 08:30:00.269234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:03.119 qpair failed and we were unable to recover it. 00:31:03.119 [2024-11-28 08:30:00.279172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:03.119 [2024-11-28 08:30:00.279221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:03.119 [2024-11-28 08:30:00.279236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:03.119 [2024-11-28 08:30:00.279244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:03.119 [2024-11-28 08:30:00.279250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:03.119 [2024-11-28 08:30:00.279265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:03.119 qpair failed and we were unable to recover it. 00:31:03.119 [2024-11-28 08:30:00.289197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:03.119 [2024-11-28 08:30:00.289244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:03.119 [2024-11-28 08:30:00.289260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:03.119 [2024-11-28 08:30:00.289268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:03.119 [2024-11-28 08:30:00.289274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:03.119 [2024-11-28 08:30:00.289290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:03.119 qpair failed and we were unable to recover it. 
00:31:03.119 [2024-11-28 08:30:00.299207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:03.119 [2024-11-28 08:30:00.299254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:03.119 [2024-11-28 08:30:00.299269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:03.119 [2024-11-28 08:30:00.299277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:03.119 [2024-11-28 08:30:00.299284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:03.119 [2024-11-28 08:30:00.299299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:03.119 qpair failed and we were unable to recover it. 00:31:03.119 [2024-11-28 08:30:00.309231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:03.119 [2024-11-28 08:30:00.309275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:03.119 [2024-11-28 08:30:00.309290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:03.119 [2024-11-28 08:30:00.309298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:03.119 [2024-11-28 08:30:00.309304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:03.119 [2024-11-28 08:30:00.309319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:03.119 qpair failed and we were unable to recover it. 00:31:03.119 [2024-11-28 08:30:00.319234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:03.119 [2024-11-28 08:30:00.319279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:03.119 [2024-11-28 08:30:00.319294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:03.119 [2024-11-28 08:30:00.319302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:03.119 [2024-11-28 08:30:00.319309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:03.119 [2024-11-28 08:30:00.319324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:03.119 qpair failed and we were unable to recover it. 
00:31:03.119 [2024-11-28 08:30:00.329167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:03.119 [2024-11-28 08:30:00.329215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:03.119 [2024-11-28 08:30:00.329230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:03.119 [2024-11-28 08:30:00.329236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:03.119 [2024-11-28 08:30:00.329243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:03.120 [2024-11-28 08:30:00.329258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:03.120 qpair failed and we were unable to recover it. 00:31:03.120 [2024-11-28 08:30:00.339298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:03.120 [2024-11-28 08:30:00.339339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:03.120 [2024-11-28 08:30:00.339354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:03.120 [2024-11-28 08:30:00.339361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:03.120 [2024-11-28 08:30:00.339368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:03.120 [2024-11-28 08:30:00.339383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:03.120 qpair failed and we were unable to recover it. 00:31:03.120 [2024-11-28 08:30:00.349338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:03.120 [2024-11-28 08:30:00.349387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:03.120 [2024-11-28 08:30:00.349405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:03.120 [2024-11-28 08:30:00.349412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:03.120 [2024-11-28 08:30:00.349419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:03.120 [2024-11-28 08:30:00.349434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:03.120 qpair failed and we were unable to recover it. 
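Editor's note: the block repeats essentially unchanged at roughly 10 ms intervals (only the timestamps advance), and it continues below. When sizing a retry storm like this from a saved console log, a couple of one-liners go a long way; build.log is a hypothetical name for this captured output.

    # How many connect attempts gave up?
    grep -c 'qpair failed and we were unable to recover it' build.log

    # Which qpair ids and tqpair pointers were involved?
    grep -oE 'on qpair id [0-9]+' build.log | sort | uniq -c
    grep -oE 'tqpair=0x[0-9a-f]+' build.log | sort | uniq -c

For this stretch, that bucketing shows the failures stay on qpair id 4 and tqpair 0x7f68c0000b90 until the tail of the storm below, where ids 1 through 3 and other tqpair pointers appear.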
00:31:03.120 [2024-11-28 08:30:00.359355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:03.120 [2024-11-28 08:30:00.359434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:03.120 [2024-11-28 08:30:00.359450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:03.120 [2024-11-28 08:30:00.359457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:03.120 [2024-11-28 08:30:00.359463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:03.120 [2024-11-28 08:30:00.359478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:03.120 qpair failed and we were unable to recover it. 00:31:03.120 [2024-11-28 08:30:00.369413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:03.120 [2024-11-28 08:30:00.369462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:03.120 [2024-11-28 08:30:00.369476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:03.120 [2024-11-28 08:30:00.369484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:03.120 [2024-11-28 08:30:00.369491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:03.120 [2024-11-28 08:30:00.369505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:03.120 qpair failed and we were unable to recover it. 00:31:03.120 [2024-11-28 08:30:00.379407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:03.120 [2024-11-28 08:30:00.379450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:03.120 [2024-11-28 08:30:00.379464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:03.120 [2024-11-28 08:30:00.379471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:03.120 [2024-11-28 08:30:00.379478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:03.120 [2024-11-28 08:30:00.379493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:03.120 qpair failed and we were unable to recover it. 
00:31:03.120 [2024-11-28 08:30:00.389299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:03.120 [2024-11-28 08:30:00.389340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:03.120 [2024-11-28 08:30:00.389355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:03.120 [2024-11-28 08:30:00.389362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:03.120 [2024-11-28 08:30:00.389372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c0000b90 00:31:03.120 [2024-11-28 08:30:00.389387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:31:03.120 qpair failed and we were unable to recover it. 00:31:03.120 [2024-11-28 08:30:00.399494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:03.120 [2024-11-28 08:30:00.399627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:03.120 [2024-11-28 08:30:00.399692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:03.120 [2024-11-28 08:30:00.399719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:03.120 [2024-11-28 08:30:00.399740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68cc000b90 00:31:03.120 [2024-11-28 08:30:00.399794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:03.120 qpair failed and we were unable to recover it. 00:31:03.382 [2024-11-28 08:30:00.409502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:03.382 [2024-11-28 08:30:00.409573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:03.382 [2024-11-28 08:30:00.409610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:03.382 [2024-11-28 08:30:00.409629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:03.382 [2024-11-28 08:30:00.409647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68cc000b90 00:31:03.382 [2024-11-28 08:30:00.409685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:03.382 qpair failed and we were unable to recover it. 
00:31:03.382 [2024-11-28 08:30:00.419543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:03.382 [2024-11-28 08:30:00.419679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:03.382 [2024-11-28 08:30:00.419744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:03.382 [2024-11-28 08:30:00.419770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:03.382 [2024-11-28 08:30:00.419791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18cc0c0 00:31:03.382 [2024-11-28 08:30:00.419845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:03.382 qpair failed and we were unable to recover it. 00:31:03.383 [2024-11-28 08:30:00.429534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:03.383 [2024-11-28 08:30:00.429617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:03.383 [2024-11-28 08:30:00.429671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:03.383 [2024-11-28 08:30:00.429691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:03.383 [2024-11-28 08:30:00.429708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18cc0c0 00:31:03.383 [2024-11-28 08:30:00.429750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:03.383 qpair failed and we were unable to recover it. 00:31:03.383 [2024-11-28 08:30:00.430187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c1e10 is same with the state(6) to be set 00:31:03.383 [2024-11-28 08:30:00.439607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:03.383 [2024-11-28 08:30:00.439719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:03.383 [2024-11-28 08:30:00.439783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:03.383 [2024-11-28 08:30:00.439810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:03.383 [2024-11-28 08:30:00.439831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c4000b90 00:31:03.383 [2024-11-28 08:30:00.439885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:03.383 qpair failed and we were unable to recover it. 
00:31:03.383 [2024-11-28 08:30:00.449629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:03.383 [2024-11-28 08:30:00.449734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:03.383 [2024-11-28 08:30:00.449782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:03.383 [2024-11-28 08:30:00.449801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:03.383 [2024-11-28 08:30:00.449816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f68c4000b90 00:31:03.383 [2024-11-28 08:30:00.449856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:03.383 qpair failed and we were unable to recover it. 00:31:03.383 [2024-11-28 08:30:00.450328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c1e10 (9): Bad file descriptor 00:31:03.383 Initializing NVMe Controllers 00:31:03.383 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:03.383 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:03.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:31:03.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:31:03.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:31:03.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:31:03.383 Initialization complete. Launching workers. 
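Editor's note: from here the harness output resumes. Each traced line below is prefixed with the elapsed stamp, the dotted test path (nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2, then its parents as the END TEST banners unwind), and the script@line that produced it. That prefix makes it cheap to carve a single test's transcript out of the combined log; again assuming the console output was saved as build.log:

    # Everything emitted by one test case, in order:
    grep -F 'nvmf_target_disconnect_tc2 --' build.log

    # Just the per-test wall-clock summaries (real/user/sys):
    grep -E '^[0-9:.]+ +(real|user|sys) ' build.log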
00:31:03.383 Starting thread on core 1 00:31:03.383 Starting thread on core 2 00:31:03.383 Starting thread on core 3 00:31:03.383 Starting thread on core 0 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:31:03.383 00:31:03.383 real 0m11.448s 00:31:03.383 user 0m21.881s 00:31:03.383 sys 0m3.806s 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:03.383 ************************************ 00:31:03.383 END TEST nvmf_target_disconnect_tc2 00:31:03.383 ************************************ 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:03.383 rmmod nvme_tcp 00:31:03.383 rmmod nvme_fabrics 00:31:03.383 rmmod nvme_keyring 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2162037 ']' 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2162037 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2162037 ']' 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2162037 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2162037 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2162037' 00:31:03.383 killing process with pid 2162037 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 2162037 00:31:03.383 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2162037 00:31:03.646 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:03.646 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:03.646 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:03.646 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:31:03.646 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:31:03.646 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:03.646 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:31:03.646 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:03.646 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:03.646 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.646 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:03.646 08:30:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.559 08:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:05.559 00:31:05.559 real 0m21.906s 00:31:05.559 user 0m49.742s 00:31:05.559 sys 0m10.073s 00:31:05.559 08:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:05.559 08:30:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:05.559 ************************************ 00:31:05.559 END TEST nvmf_target_disconnect 00:31:05.559 ************************************ 00:31:05.822 08:30:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:05.822 00:31:05.822 real 6m34.404s 00:31:05.822 user 11m27.375s 00:31:05.822 sys 2m15.787s 00:31:05.822 08:30:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:05.822 08:30:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.822 ************************************ 00:31:05.822 END TEST nvmf_host 00:31:05.822 ************************************ 00:31:05.822 08:30:02 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:31:05.822 08:30:02 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:31:05.822 08:30:02 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:31:05.822 08:30:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:05.822 08:30:02 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:05.822 08:30:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:05.822 ************************************ 00:31:05.822 START TEST nvmf_target_core_interrupt_mode 00:31:05.822 ************************************ 00:31:05.822 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:31:05.822 * Looking for test storage... 00:31:05.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:31:05.822 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:05.822 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:31:05.822 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:06.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.084 --rc genhtml_branch_coverage=1 00:31:06.084 --rc genhtml_function_coverage=1 00:31:06.084 --rc genhtml_legend=1 00:31:06.084 --rc geninfo_all_blocks=1 00:31:06.084 --rc geninfo_unexecuted_blocks=1 00:31:06.084 00:31:06.084 ' 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:06.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.084 --rc genhtml_branch_coverage=1 00:31:06.084 --rc genhtml_function_coverage=1 00:31:06.084 --rc genhtml_legend=1 00:31:06.084 --rc geninfo_all_blocks=1 00:31:06.084 --rc geninfo_unexecuted_blocks=1 00:31:06.084 00:31:06.084 ' 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:06.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.084 --rc genhtml_branch_coverage=1 00:31:06.084 --rc genhtml_function_coverage=1 00:31:06.084 --rc genhtml_legend=1 00:31:06.084 --rc geninfo_all_blocks=1 00:31:06.084 --rc geninfo_unexecuted_blocks=1 00:31:06.084 00:31:06.084 ' 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:06.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.084 --rc genhtml_branch_coverage=1 00:31:06.084 --rc genhtml_function_coverage=1 00:31:06.084 --rc genhtml_legend=1 00:31:06.084 --rc geninfo_all_blocks=1 00:31:06.084 --rc geninfo_unexecuted_blocks=1 00:31:06.084 00:31:06.084 ' 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:31:06.084 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:06.085 ************************************ 00:31:06.085 START TEST nvmf_abort 00:31:06.085 ************************************ 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:06.085 * Looking for test storage... 00:31:06.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:31:06.085 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:06.346 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:06.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.346 --rc genhtml_branch_coverage=1 00:31:06.347 --rc genhtml_function_coverage=1 00:31:06.347 --rc genhtml_legend=1 00:31:06.347 --rc geninfo_all_blocks=1 00:31:06.347 --rc geninfo_unexecuted_blocks=1 00:31:06.347 00:31:06.347 ' 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:06.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.347 --rc genhtml_branch_coverage=1 00:31:06.347 --rc genhtml_function_coverage=1 00:31:06.347 --rc genhtml_legend=1 00:31:06.347 --rc geninfo_all_blocks=1 00:31:06.347 --rc geninfo_unexecuted_blocks=1 00:31:06.347 00:31:06.347 ' 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:06.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.347 --rc genhtml_branch_coverage=1 00:31:06.347 --rc genhtml_function_coverage=1 00:31:06.347 --rc genhtml_legend=1 00:31:06.347 --rc geninfo_all_blocks=1 00:31:06.347 --rc geninfo_unexecuted_blocks=1 00:31:06.347 00:31:06.347 ' 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:06.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.347 --rc genhtml_branch_coverage=1 00:31:06.347 --rc genhtml_function_coverage=1 00:31:06.347 --rc genhtml_legend=1 00:31:06.347 --rc geninfo_all_blocks=1 00:31:06.347 --rc geninfo_unexecuted_blocks=1 00:31:06.347 00:31:06.347 ' 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:06.347 08:30:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:06.347 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:31:06.348 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:14.491 08:30:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:14.491 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
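Annotation: the block above builds per-NIC-family arrays (e810, x722, mlx) by indexing an associative pci_bus_cache map with vendor:device keys, and since this rig carries E810 ports it then narrows pci_devs to the e810 entries. A minimal sketch of that classification pattern follows; the 0x8086:0x159b key and the e810 array name are taken from the trace, while the lspci parsing is an illustrative assumption, not the harness's own cache-filling code.

declare -A pci_bus_cache
# Fill the cache with "0xVENDOR:0xDEVICE" -> "bdf bdf ..." entries.
while read -r addr ids; do
  pci_bus_cache["0x${ids/:/:0x}"]+="$addr "
done < <(lspci -Dnmm | awk '{print $1, $3":"$4}' | tr -d '"')
e810=(${pci_bus_cache["0x8086:0x159b"]})   # Intel E810 ports, driven by ice
echo "E810 ports found: ${e810[*]:-none}"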
00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:14.491 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:14.491 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:14.491 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:14.491 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:14.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:14.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:31:14.492 00:31:14.492 --- 10.0.0.2 ping statistics --- 00:31:14.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.492 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:14.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:14.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:31:14.492 00:31:14.492 --- 10.0.0.1 ping statistics --- 00:31:14.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.492 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:14.492 08:30:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:14.492 08:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:31:14.492 08:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:14.492 08:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:14.492 08:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:14.492 08:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2168100 00:31:14.492 08:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2168100 00:31:14.492 08:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:14.492 08:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2168100 ']' 00:31:14.492 08:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:14.492 08:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:14.492 08:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:14.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:14.492 08:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:14.492 08:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:14.492 [2024-11-28 08:30:11.118677] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:14.492 [2024-11-28 08:30:11.119832] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:31:14.492 [2024-11-28 08:30:11.119883] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:14.492 [2024-11-28 08:30:11.220704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:14.492 [2024-11-28 08:30:11.272425] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:14.492 [2024-11-28 08:30:11.272479] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:14.492 [2024-11-28 08:30:11.272488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:14.492 [2024-11-28 08:30:11.272495] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:14.492 [2024-11-28 08:30:11.272501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:14.492 [2024-11-28 08:30:11.274356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:14.492 [2024-11-28 08:30:11.274603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:14.492 [2024-11-28 08:30:11.274604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:14.492 [2024-11-28 08:30:11.352794] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:14.492 [2024-11-28 08:30:11.353680] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:14.492 [2024-11-28 08:30:11.354117] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
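Annotation: at this point nvmfappstart has launched the target inside the test namespace and waitforlisten is polling for its RPC socket (the trace shows max_retries=100). A simplified restatement of that launch-and-wait handshake, using the binary path, flags, and socket seen above; the polling loop itself is an assumption, not the verbatim helper from autotest_common.sh.

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!
# -m 0xE pins reactors to cores 1-3, matching the reactor_run notices above.
# Poll until the app answers on its default UNIX socket, then proceed.
for ((i = 0; i < 100; i++)); do
  scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
  sleep 0.5
done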
00:31:14.492 [2024-11-28 08:30:11.354289] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:14.754 08:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:14.754 08:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:31:14.754 08:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:14.754 08:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:14.754 08:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:14.754 08:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.754 08:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:31:14.754 08:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.754 08:30:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:14.754 [2024-11-28 08:30:11.995666] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.754 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.754 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:31:14.754 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.754 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:15.016 Malloc0 00:31:15.016 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.016 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:15.016 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.016 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:15.016 Delay0 00:31:15.016 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.016 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:15.016 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.016 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:15.016 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.016 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:31:15.016 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:15.016 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:15.016 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.016 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:15.016 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.016 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:15.016 [2024-11-28 08:30:12.103638] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:15.016 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.016 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:15.016 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.016 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:15.016 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.016 08:30:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:31:15.016 [2024-11-28 08:30:12.203836] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:17.563 Initializing NVMe Controllers 00:31:17.563 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:17.564 controller IO queue size 128 less than required 00:31:17.564 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:31:17.564 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:31:17.564 Initialization complete. Launching workers. 
00:31:17.564 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28743 00:31:17.564 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28800, failed to submit 66 00:31:17.564 success 28743, unsuccessful 57, failed 0 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:17.564 rmmod nvme_tcp 00:31:17.564 rmmod nvme_fabrics 00:31:17.564 rmmod nvme_keyring 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2168100 ']' 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2168100 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2168100 ']' 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2168100 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2168100 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2168100' 00:31:17.564 killing process with pid 2168100 
00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2168100 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2168100 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:17.564 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.479 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:19.479 00:31:19.479 real 0m13.404s 00:31:19.479 user 0m10.632s 00:31:19.479 sys 0m7.071s 00:31:19.479 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:19.479 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:19.479 ************************************ 00:31:19.479 END TEST nvmf_abort 00:31:19.479 ************************************ 00:31:19.479 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:19.479 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:19.479 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:19.479 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:19.479 ************************************ 00:31:19.479 START TEST nvmf_ns_hotplug_stress 00:31:19.479 ************************************ 00:31:19.479 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:19.741 * Looking for test storage... 
00:31:19.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:19.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.741 --rc genhtml_branch_coverage=1 00:31:19.741 --rc genhtml_function_coverage=1 00:31:19.741 --rc genhtml_legend=1 00:31:19.741 --rc geninfo_all_blocks=1 00:31:19.741 --rc geninfo_unexecuted_blocks=1 00:31:19.741 00:31:19.741 ' 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:19.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.741 --rc genhtml_branch_coverage=1 00:31:19.741 --rc genhtml_function_coverage=1 00:31:19.741 --rc genhtml_legend=1 00:31:19.741 --rc geninfo_all_blocks=1 00:31:19.741 --rc geninfo_unexecuted_blocks=1 00:31:19.741 00:31:19.741 ' 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:19.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.741 --rc genhtml_branch_coverage=1 00:31:19.741 --rc genhtml_function_coverage=1 00:31:19.741 --rc genhtml_legend=1 00:31:19.741 --rc geninfo_all_blocks=1 00:31:19.741 --rc geninfo_unexecuted_blocks=1 00:31:19.741 00:31:19.741 ' 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:19.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.741 --rc genhtml_branch_coverage=1 00:31:19.741 --rc genhtml_function_coverage=1 
00:31:19.741 --rc genhtml_legend=1 00:31:19.741 --rc geninfo_all_blocks=1 00:31:19.741 --rc geninfo_unexecuted_blocks=1 00:31:19.741 00:31:19.741 ' 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:19.741 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
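Annotation: each test file re-sources nvmf/common.sh, which mints a fresh host identity; the NVME_HOSTNQN above comes from nvme gen-hostnqn (real nvme-cli) and NVME_HOSTID is its uuid suffix. A standalone sketch of that pattern; the suffix-stripping expansion is our reading of the logged values, not quoted harness code.

NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}      # bare uuid, e.g. 00d0226a-fbea-ec11-9bc7-a4bf019282be
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
echo "host identity: ${NVME_HOST[*]}"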
00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:31:19.742 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:28.001 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:28.001 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:31:28.001 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:28.001 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:28.001 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:28.001 08:30:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:28.001 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:28.001 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:31:28.001 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:28.001 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:31:28.001 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:31:28.001 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:31:28.001 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:31:28.001 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:31:28.001 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:31:28.001 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:28.001 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:28.001 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:28.001 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:28.001 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:28.001 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:28.001 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:28.002 08:30:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:28.002 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:28.002 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:28.002 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:28.003 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:28.003 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:28.003 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.003 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:28.003 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.003 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:28.003 
00:31:28.003 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:31:28.003 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:28.003 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:31:28.003 Found net devices under 0000:4b:00.0: cvl_0_0
00:31:28.003 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:28.003 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:31:28.003 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:28.003 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:31:28.003 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:28.003 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:31:28.003 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:31:28.003 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:28.003 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:31:28.003 Found net devices under 0000:4b:00.1: cvl_0_1
00:31:28.003 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:28.003 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:31:28.003 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes
00:31:28.003 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:31:28.004 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:31:28.004 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:31:28.004 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:31:28.004 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:31:28.004 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:31:28.004 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:31:28.004 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:31:28.004 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
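
The discovery pass above is table-driven: the script collects the supported Intel/Mellanox PCI device IDs, keeps the e810 family selected for this run (0x8086:0x159b), and maps each PCI function to its kernel netdev through sysfs before picking target and initiator interfaces. A minimal stand-alone sketch of the same mapping (not the autotest code itself; the lspci filter is hard-coded to the E810 device ID seen in this log):

  #!/usr/bin/env bash
  # List the net interfaces behind each Intel E810 function (0x8086:0x159b).
  for pci in $(lspci -Dmmn -d 8086:159b | awk '{print $1}'); do
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
      [[ -e $dev ]] && echo "Found net devices under $pci: ${dev##*/}"
    done
  done
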
00:31:28.004 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:31:28.004 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:31:28.004 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:31:28.004 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:31:28.004 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:28.004 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:31:28.004 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:31:28.004 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:31:28.004 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:28.004 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:28.005 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:28.005 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:31:28.005 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:28.006 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:28.006 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:28.006 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:28.006 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:28.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:28.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms
00:31:28.006
00:31:28.006 --- 10.0.0.2 ping statistics ---
00:31:28.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:28.006 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms
00:31:28.006 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:28.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:28.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms
00:31:28.006
00:31:28.006 --- 10.0.0.1 ping statistics ---
00:31:28.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:28.006 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms
00:31:28.006 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:28.006 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:31:28.006 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:28.006 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:28.006 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:28.006 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:28.006 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:28.006 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:28.006 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:28.006 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:31:28.006 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:28.006 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:28.006 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:31:28.006 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2172854
00:31:28.006 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2172854
00:31:28.006 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:31:28.006 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2172854 ']'
00:31:28.007 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:28.007 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:28.007 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:28.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
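
Condensed from the trace above, the tcp fixture comes down to this: the target port cvl_0_0 is moved into a private network namespace and addressed as 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24 (the two ports appear to be cabled back-to-back on this rig, since the pings cross them), the NVMe/TCP port is opened in the firewall, and reachability is verified both ways before the target starts inside the namespace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target side
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> root ns
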
00:31:28.007 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:28.007 08:30:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:31:28.007 [2024-11-28 08:30:24.520571] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
[2024-11-28 08:30:24.521714] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization...
[2024-11-28 08:30:24.521769] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-11-28 08:30:24.621384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
[2024-11-28 08:30:24.674695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-11-28 08:30:24.674747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-11-28 08:30:24.674756] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-11-28 08:30:24.674764] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-11-28 08:30:24.674771] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-11-28 08:30:24.676666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
[2024-11-28 08:30:24.676826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-11-28 08:30:24.676826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
[2024-11-28 08:30:24.761049] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
[2024-11-28 08:30:24.762123] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
[2024-11-28 08:30:24.762536] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
[2024-11-28 08:30:24.762672] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
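
The notices above record the interrupt-mode bring-up: nvmf_tgt is launched inside the namespace with --interrupt-mode and core mask 0xE, so three reactors start on cores 1-3 and every spdk_thread comes up in interrupt mode rather than polling. One way to reproduce and inspect this state (a sketch, assuming the default /var/tmp/spdk.sock RPC socket; framework_get_reactors reports per-core reactor state):

  # Launch line reproduced from the trace (run from the spdk checkout):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  # Once the RPC socket is up, list the reactors and their mode:
  ./scripts/rpc.py framework_get_reactors
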
00:31:28.274 08:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:28.274 08:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:31:28.274 08:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:28.274 08:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:28.274 08:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:31:28.274 08:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:28.274 08:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:31:28.274 08:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
[2024-11-28 08:30:25.545716] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:28.535 08:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:31:28.535 08:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:28.796 [2024-11-28 08:30:25.910382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:28.796 08:30:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:31:29.057 08:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:31:29.057 Malloc0
00:31:29.057 08:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:31:29.319 Delay0
00:31:29.319 08:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:29.580 08:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:31:29.580 NULL1
00:31:29.842 08:30:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
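
Stripped of the harness, the target configuration built above is nine RPCs: a TCP transport, a subsystem capped at 10 namespaces (-m 10), data and discovery listeners on 10.0.0.2:4420, a 32 MiB malloc disk wrapped in a delay bdev, and a 1000 MiB null bdev, with both bdevs attached as namespaces (paths shortened here; the log uses the full /var/jenkins/... rpc.py path):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_malloc_create 32 512 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc.py bdev_null_create NULL1 1000 512
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
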
00:31:29.842 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2173543
00:31:29.842 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:29.842 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:31:29.842 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:30.103 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:30.365 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:31:30.365 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:31:30.365 true
00:31:30.626 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:30.626 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:30.626 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:30.888 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:31:30.888 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:31:31.152 true
00:31:31.152 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:31.152 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:31.413 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:31.674 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:31:31.674 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:31:31.674 true
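
The repeating @44-@50 lines that fill the rest of this test are a single loop: while spdk_nvme_perf hammers the subsystem from the initiator side, the script hot-removes and re-adds namespace 1 and bumps the null bdev size by one (MiB) per pass. The reconstructed shape, a sketch assembled from the logged commands rather than a verbatim copy of ns_hotplug_stress.sh:

  # Background load from the initiator (flags exactly as logged above):
  ./build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  null_size=1000
  while kill -0 "$PERF_PID"; do                                        # loop while perf still runs
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # hot-remove nsid 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0     # hot-add it back
    null_size=$((null_size + 1))
    rpc.py bdev_null_resize NULL1 "$null_size"                         # resize NULL1 under I/O
  done
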
00:31:31.674 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:31.674 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:31.936 08:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:32.197 08:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:31:32.197 08:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:31:32.197 true
00:31:32.457 08:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:32.457 08:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:32.457 08:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:32.719 08:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:31:32.719 08:30:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:31:32.980 true
00:31:32.980 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:32.980 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:33.241 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:33.241 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:31:33.241 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:31:33.502 true
00:31:33.502 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:33.502 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:33.502 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:33.762 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:31:33.762 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:31:34.023 true
00:31:34.023 08:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:34.023 08:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:34.284 08:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:34.284 08:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:31:34.284 08:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:31:34.544 true
00:31:34.544 08:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:34.544 08:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:34.804 08:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:34.804 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:31:34.804 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:31:35.066 true
00:31:35.066 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:35.066 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:35.327 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:35.590 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:31:35.590 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:31:35.590 true
00:31:35.590 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:35.590 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:35.850 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:36.111 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:31:36.111 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:31:36.111 true
00:31:36.111 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:36.111 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:36.372 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:36.632 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:31:36.632 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:31:36.632 true
00:31:36.632 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:36.894 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:36.894 08:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:37.155 08:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:31:37.155 08:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:31:37.416 true
00:31:37.416 08:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:37.416 08:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:37.416 08:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:37.677 08:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:31:37.677 08:30:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:31:37.941 true
00:31:37.941 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:37.941 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:38.200 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:38.200 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:31:38.200 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:31:38.460 true
00:31:38.460 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:38.460 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:38.721 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:38.721 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:31:38.721 08:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:31:38.982 true
00:31:38.982 08:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:38.982 08:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:39.243 08:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:39.504 08:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:31:39.504 08:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:31:39.504 true
00:31:39.504 08:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:39.504 08:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:39.768 08:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:40.035 08:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:31:40.035 08:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:31:40.035 true
00:31:40.035 08:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:40.035 08:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:40.296 08:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:40.556 08:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:31:40.556 08:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:31:40.556 true
00:31:40.556 08:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:40.556 08:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:40.817 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:41.079 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:31:41.079 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:31:41.079 true
00:31:41.342 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:41.342 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:41.342 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:41.604 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:31:41.604 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:31:41.865 true
00:31:41.865 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:41.865 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:42.126 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:42.126 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:31:42.126 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:31:42.387 true
00:31:42.387 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:42.387 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:42.649 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:42.649 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:31:42.649 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:31:42.910 true
00:31:42.910 08:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:42.911 08:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:43.172 08:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:43.173 08:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:31:43.173 08:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:31:43.434 true
00:31:43.434 08:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:43.434 08:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:43.696 08:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:43.958 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:31:43.958 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:31:43.958 true
00:31:43.958 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:43.958 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:44.219 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:44.479 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:31:44.480 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:31:44.480 true
00:31:44.480 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:44.480 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:44.740 08:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:45.000 08:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:31:45.000 08:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:31:45.000 true
00:31:45.262 08:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:45.262 08:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:45.523 08:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:45.523 08:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:31:45.523 08:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:31:45.784 true
00:31:45.784 08:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:45.784 08:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:46.044 08:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:46.044 08:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:31:46.044 08:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:31:46.305 true
00:31:46.305 08:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:46.305 08:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:46.566 08:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:46.826 08:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:31:46.826 08:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:31:46.826 true
00:31:46.826 08:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:46.826 08:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:47.087 08:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:47.347 08:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:31:47.348 08:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:31:47.348 true
00:31:47.609 08:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:47.609 08:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:47.609 08:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:47.870 08:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:31:47.871 08:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:31:48.131 true
00:31:48.131 08:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:48.131 08:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:48.131 08:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:48.391 08:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033
00:31:48.391 08:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:31:48.651 true
00:31:48.651 08:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:48.651 08:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:48.911 08:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:48.911 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034
00:31:48.911 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034
00:31:49.171 true
00:31:49.171 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:49.171 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:49.431 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:49.431 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:31:49.431 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
00:31:49.690 true
00:31:49.690 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:49.690 08:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:49.950 08:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:50.209 08:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036
00:31:50.209 08:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036
00:31:50.209 true
00:31:50.209 08:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:50.209 08:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:50.468 08:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:50.728 08:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037
00:31:50.728 08:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037
00:31:50.728 true
00:31:50.728 08:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:50.728 08:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:50.987 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:51.247 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038
00:31:51.247 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038
00:31:51.247 true
00:31:51.247 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:51.247 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:51.506 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:51.767 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039
00:31:51.767 08:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039
00:31:51.767 true
00:31:51.767 08:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:51.767 08:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:52.027 08:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:52.287 08:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040
00:31:52.287 08:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040
00:31:52.287 true
00:31:52.549 08:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:52.549 08:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:52.549 08:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:52.810 08:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041
00:31:52.810 08:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041
00:31:53.071 true
00:31:53.071 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:53.071 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:53.071 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:53.332 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042
00:31:53.332 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042
00:31:53.592 true
00:31:53.592 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:53.592 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:53.853 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:53.854 08:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043
00:31:53.854 08:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043
00:31:54.114 true
00:31:54.114 08:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:54.114 08:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:54.375 08:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:54.636 08:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044
00:31:54.636 08:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044
00:31:54.636 true
00:31:54.636 08:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:54.636 08:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:54.897 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:55.158 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045
00:31:55.158 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045
00:31:55.158 true
00:31:55.158 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:55.158 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:55.419 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:55.680 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046
00:31:55.680 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046
00:31:55.680 true
00:31:55.680 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:55.680 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:55.940 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:56.201 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:31:56.201 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:31:56.201 true
00:31:56.462 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:56.462 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:56.462 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:56.722 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048
00:31:56.722 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
00:31:56.722 true
00:31:56.982 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543
00:31:56.982 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:31:56.982 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:57.243 08:30:54
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:31:57.243 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:31:57.504 true 00:31:57.504 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543 00:31:57.504 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:57.504 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:57.763 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:31:57.763 08:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:31:58.028 true 00:31:58.028 08:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543 00:31:58.028 08:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:58.291 08:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:58.291 08:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:31:58.291 08:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:31:58.551 true 00:31:58.551 08:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543 00:31:58.552 08:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:58.812 08:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:58.812 08:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:31:58.812 08:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:31:59.072 true 00:31:59.072 08:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543 00:31:59.072 08:30:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:59.332 08:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:59.594 08:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:31:59.594 08:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:31:59.594 true 00:31:59.594 08:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543 00:31:59.594 08:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:59.855 08:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:00.116 08:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:32:00.116 08:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:32:00.116 Initializing NVMe Controllers 00:32:00.116 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:00.116 Controller IO queue size 128, less than required. 00:32:00.116 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:00.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:00.116 Initialization complete. Launching workers. 
00:32:00.116 ========================================================
00:32:00.116                                                                                 Latency(us)
00:32:00.116 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:32:00.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30091.10      14.69    4253.79    1128.55   11492.89
00:32:00.116 ========================================================
00:32:00.116 Total                                                                    :   30091.10      14.69    4253.79    1128.55   11492.89
00:32:00.116
00:32:00.116 true 00:32:00.116 08:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2173543 00:32:00.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2173543) - No such process 00:32:00.116 08:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2173543 00:32:00.116 08:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:00.378 08:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:00.639 08:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:32:00.639 08:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:32:00.639 08:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:32:00.639 08:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:00.639 08:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:32:00.639 null0 00:32:00.639 08:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:00.639 08:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:00.639 08:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:32:00.900 null1 00:32:00.900 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:00.900 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:00.900 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:32:01.161 null2 00:32:01.161 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:01.161 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:01.161 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress
-- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:32:01.161 null3 00:32:01.161 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:01.161 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:01.161 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:32:01.421 null4 00:32:01.421 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:01.421 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:01.421 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:32:01.682 null5 00:32:01.682 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:01.682 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:01.682 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:32:01.682 null6 00:32:01.682 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:01.682 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:01.682 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:32:01.944 null7 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
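The eight devices created by the sh@59-sh@60 loop above are SPDK null bdevs: bdev_null_create takes a bdev name, a size in MiB, and a block size in bytes, so each nullN here is a 100 MiB device with 4096-byte blocks and no backing storage, which makes it a cheap target for namespace hotplug. Condensed, the loop amounts to the following sketch (rpc.py stands in for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py invocation seen in the trace):

    # One 100 MiB, 4 KiB-block null bdev per worker, as traced at sh@59-sh@60.
    nthreads=8
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create "null$i" 100 4096
    done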
00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
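Each worker is one invocation of the script's add_remove function, and its expansion is exactly what the sh@14-sh@18 lines trace: ten rounds of mapping the worker's null bdev into cnode1 under a fixed namespace ID, then hot-removing it. Reconstructed from the trace (a sketch, not the verbatim ns_hotplug_stress.sh source; rpc.py again abbreviates the full script path):

    add_remove() {
        local nsid=$1 bdev=$2              # sh@14: namespace ID and backing bdev
        for ((i = 0; i < 10; i++)); do     # sh@16: ten add/remove rounds
            # sh@17: expose the bdev as namespace $nsid of cnode1
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            # sh@18: hot-remove the same namespace
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }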
00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
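This concurrent phase complements the single-namespace loop that produced the null_size=1035 ... 1054 run earlier in the trace: one loop hot-removed and re-added the Delay0 namespace while growing the NULL1 bdev one size step per pass, and used kill -0 on the I/O generator (PID 2173543 in this run) as its exit test; when that process finished, kill -0 reported "No such process" and the script fell through to wait. As a sketch of those sh@44-sh@50 lines (app_pid is a placeholder for the PID the script tracks):

    # null_size carries over from before this excerpt (values 1035..1054 visible above)
    while kill -0 "$app_pid"; do                                        # sh@44
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46
        null_size=$((null_size + 1))                                    # sh@49
        rpc.py bdev_null_resize NULL1 "$null_size"                      # sh@50
    done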
00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
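The sh@62-sh@66 entries interleaved through this stretch are the harness that launches the workers: one backgrounded add_remove per null bdev, with each PID appended to pids so that the single wait 2179708 2179709 ... 2179721 traced just below can block on all eight at once; the interleaved add/remove output that follows is those eight loops racing each other. In sketch form:

    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &   # sh@63: nsid i+1 backed by null$i
        pids+=($!)                           # sh@64: remember the worker PID
    done
    wait "${pids[@]}"                        # sh@66: return once every loop ends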
00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2179708 2179709 2179712 2179713 2179715 2179717 2179719 2179721 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:01.944 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:02.205 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:02.205 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:02.205 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:02.205 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:02.205 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:02.205 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:02.205 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:02.205 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:02.205 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.205 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.205 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:02.205 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.205 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.205 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:02.205 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.205 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.205 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:02.205 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.205 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.205 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:02.466 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.466 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.466 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:02.466 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.466 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.466 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:02.466 08:30:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.466 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.466 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:02.466 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.466 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.466 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:02.466 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:02.466 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:02.466 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:02.466 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:02.466 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:02.466 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:02.466 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:02.466 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:02.728 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.728 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.728 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:02.728 08:30:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.728 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.728 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:02.728 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.728 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.728 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:02.728 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.728 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.728 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:02.728 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.728 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.728 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:02.728 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.728 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.728 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:02.728 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.728 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.728 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:02.728 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.728 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.728 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:02.728 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:02.729 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.990 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:03.250 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.250 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.250 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:03.250 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:03.250 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:03.250 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:03.250 08:31:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:03.250 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:03.250 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.250 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.250 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:03.250 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:03.250 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.250 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.250 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:03.250 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:03.250 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:03.512 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.512 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.512 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:03.512 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.512 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.512 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:03.512 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:03.512 
08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.512 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.512 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:03.512 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.512 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.512 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:03.512 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.512 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.512 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:03.512 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:03.512 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:03.512 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.512 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.512 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:03.512 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:03.772 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:03.772 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.772 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.772 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:32:03.772 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:03.772 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:03.772 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.772 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.772 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:03.772 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.772 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.772 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:03.772 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:03.773 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.773 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.773 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:03.773 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.773 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.773 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:03.773 08:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:03.773 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.773 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.773 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:03.773 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.773 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.773 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:03.773 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:04.034 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:04.034 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.034 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.034 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:04.034 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:04.034 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:04.034 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.034 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:04.034 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.034 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:04.034 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:04.034 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.034 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.034 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:04.034 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.034 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.034 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:04.034 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:04.294 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.294 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.294 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:04.294 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.294 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.294 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:04.294 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.294 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.294 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:04.294 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.294 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.294 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:04.294 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:04.294 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:04.295 08:31:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:04.295 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.295 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.295 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:04.295 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:04.295 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:04.295 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.295 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.295 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:04.295 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:04.295 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.295 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.295 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:04.295 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.295 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.295 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:04.556 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:04.556 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:04.556 
08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.556 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.556 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:04.556 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.556 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.556 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.556 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.556 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:04.556 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:04.556 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:04.556 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:04.556 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.556 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.556 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:04.556 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:04.818 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.818 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.818 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:04.818 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:32:04.818 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:04.818 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.818 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.818 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:04.818 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.818 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.818 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:04.818 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:04.818 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.818 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.818 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:04.818 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:04.818 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:04.818 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.818 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.818 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:04.818 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:04.818 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.818 08:31:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.818 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:05.080 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:05.340 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.340 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.340 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:05.340 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:05.340 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:05.340 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.340 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.340 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:05.340 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.340 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.340 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:05.340 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.340 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.340 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:05.340 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:05.340 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.340 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.340 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:05.340 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:05.601 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.601 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.601 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:05.601 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:05.601 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.601 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.601 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.601 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.601 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:05.601 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:05.601 08:31:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.601 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.601 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.601 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.601 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.601 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.601 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.601 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.907 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.907 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.907 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:32:05.907 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:32:05.907 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:05.907 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:32:05.907 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:05.907 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:32:05.907 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:05.907 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:05.907 rmmod nvme_tcp 00:32:05.907 rmmod nvme_fabrics 00:32:05.907 rmmod nvme_keyring 00:32:05.907 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:05.907 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:32:05.907 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:32:05.907 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2172854 ']' 00:32:05.907 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2172854 00:32:05.907 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2172854 ']' 00:32:05.907 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2172854 00:32:05.907 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 
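The interleaved (( ++i )), nvmf_subsystem_add_ns and nvmf_subsystem_remove_ns entries above are bash xtrace from the stress loop at target/ns_hotplug_stress.sh lines 16-18: apparently several instances of the same ten-iteration add/remove loop run concurrently against nqn.2016-06.io.spdk:cnode1, which is why their ordering is scrambled in the log. A minimal sketch of that loop shape, assuming an add_remove helper backgrounded once per namespace (illustrative structure only, not the script's exact code):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as traced above
    subsys=nqn.2016-06.io.spdk:cnode1

    # Sketch only: the add_remove name and per-namespace backgrounding are assumptions.
    add_remove() {
        local nsid=$1 i
        for (( i = 0; i < 10; ++i )); do                                            # sh@16 in the trace
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" "$subsys" "null$((nsid - 1))"  # sh@17
            $rpc_py nvmf_subsystem_remove_ns "$subsys" "$nsid"                      # sh@18
        done
    }

    for nsid in {1..8}; do
        add_remove "$nsid" &   # eight concurrent workers, one per null bdev / namespace ID
    done
    wait

Racing attach and detach on the same subsystem is the point of the test: it exercises the namespace hotplug paths under overlapping RPCs rather than checking one orderly add/remove. The teardown traced below then unwinds the target with nvmftestfini.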
00:32:05.907 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:05.907 08:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2172854
00:32:05.907 08:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:32:05.907 08:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:32:05.907 08:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2172854'
00:32:05.907 killing process with pid 2172854
00:32:05.907 08:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2172854
00:32:05.907 08:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2172854
00:32:06.260 08:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:06.260 08:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:06.260 08:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:06.260 08:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:32:06.260 08:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:32:06.260 08:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:06.260 08:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:32:06.260 08:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:06.260 08:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:06.260 08:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:06.260 08:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:06.260 08:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:08.179 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:08.179
00:32:08.179 real 0m48.538s
00:32:08.179 user 3m1.420s
00:32:08.179 sys 0m22.400s
00:32:08.179 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:08.179 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:32:08.179 ************************************
00:32:08.179 END TEST nvmf_ns_hotplug_stress
00:32:08.179 ************************************
00:32:08.179 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:32:08.179 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:08.179 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:08.179 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:08.179 ************************************ 00:32:08.179 START TEST nvmf_delete_subsystem 00:32:08.179 ************************************ 00:32:08.179 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:32:08.179 * Looking for test storage... 00:32:08.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:08.441 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:08.441 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:32:08.441 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:08.441 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:08.441 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:08.441 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:08.441 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:08.441 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:32:08.441 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:32:08.441 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:32:08.441 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:32:08.441 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:32:08.441 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:32:08.441 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:08.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:08.442 --rc genhtml_branch_coverage=1 00:32:08.442 --rc genhtml_function_coverage=1 00:32:08.442 --rc genhtml_legend=1 00:32:08.442 --rc geninfo_all_blocks=1 00:32:08.442 --rc geninfo_unexecuted_blocks=1 00:32:08.442 00:32:08.442 ' 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:08.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:08.442 --rc genhtml_branch_coverage=1 00:32:08.442 --rc genhtml_function_coverage=1 00:32:08.442 --rc genhtml_legend=1 00:32:08.442 --rc geninfo_all_blocks=1 00:32:08.442 --rc geninfo_unexecuted_blocks=1 00:32:08.442 00:32:08.442 ' 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:08.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:08.442 --rc genhtml_branch_coverage=1 00:32:08.442 --rc genhtml_function_coverage=1 00:32:08.442 --rc genhtml_legend=1 00:32:08.442 --rc geninfo_all_blocks=1 00:32:08.442 --rc geninfo_unexecuted_blocks=1 00:32:08.442 00:32:08.442 ' 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:08.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:08.442 --rc genhtml_branch_coverage=1 00:32:08.442 --rc genhtml_function_coverage=1 00:32:08.442 --rc 
genhtml_legend=1 00:32:08.442 --rc geninfo_all_blocks=1 00:32:08.442 --rc geninfo_unexecuted_blocks=1 00:32:08.442 00:32:08.442 ' 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:08.442 08:31:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:08.442 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:08.443 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:08.443 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:08.443 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:08.443 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:08.443 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:08.443 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:08.443 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:32:08.443 08:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:16.595 08:31:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:16.595 08:31:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:16.595 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:16.595 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:16.595 08:31:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:16.595 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:16.595 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:16.596 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:16.596 08:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:16.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:16.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:32:16.596 00:32:16.596 --- 10.0.0.2 ping statistics --- 00:32:16.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:16.596 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:16.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:16.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:32:16.596 00:32:16.596 --- 10.0.0.1 ping statistics --- 00:32:16.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:16.596 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2184802 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2184802 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2184802 ']' 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:16.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
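The trace above is nvmf/common.sh discovering the host's NICs by PCI vendor:device ID (the e810, x722 and mlx arrays), selecting the two e810 ports at 0000:4b:00.0/0000:4b:00.1 (cvl_0_0 and cvl_0_1), and wiring them into the standard two-sided NVMe/TCP topology: the target port is moved into a private network namespace, the initiator port stays in the default namespace, an iptables rule opens TCP/4420, and one ping in each direction proves the link. A minimal sketch of that setup, using the interface and namespace names this run discovered (another host would substitute its own):

    # Back-to-back NVMe/TCP test topology (names as discovered in this run)
    TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"              # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"          # initiator stays in the default ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                             # default ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1         # namespace  -> default ns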
00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:16.596 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:16.596 [2024-11-28 08:31:13.160333] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:16.596 [2024-11-28 08:31:13.161453] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:32:16.596 [2024-11-28 08:31:13.161503] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:16.596 [2024-11-28 08:31:13.260726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:16.596 [2024-11-28 08:31:13.312510] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:16.596 [2024-11-28 08:31:13.312562] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:16.596 [2024-11-28 08:31:13.312570] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:16.596 [2024-11-28 08:31:13.312578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:16.596 [2024-11-28 08:31:13.312584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:16.596 [2024-11-28 08:31:13.314212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:16.596 [2024-11-28 08:31:13.314266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:16.596 [2024-11-28 08:31:13.392100] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:16.596 [2024-11-28 08:31:13.392728] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:16.596 [2024-11-28 08:31:13.393021] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
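nvmfappstart then launches the target inside that namespace; the notices above show what this suite actually exercises: with --interrupt-mode, both reactors and every spdk_thread come up in interrupt mode instead of busy polling. The launch as recorded here (-i, -e and -m are the shared-memory id, tracepoint group mask, and core mask):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &   # two reactors, cores 0-1
    nvmfpid=$!
    # waitforlisten blocks until the app accepts RPCs on /var/tmp/spdk.sock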
00:32:16.858 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:16.858 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:32:16.858 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:16.858 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:16.858 08:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:16.858 [2024-11-28 08:31:14.027348] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:16.858 [2024-11-28 08:31:14.059948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:16.858 NULL1 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.858 08:31:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:16.858 Delay0 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2184906 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:32:16.858 08:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:17.119 [2024-11-28 08:31:14.184978] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
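Once the target answers RPCs, delete_subsystem.sh builds its stack: a TCP transport, subsystem cnode1 (capped at 10 namespaces), a listener on 10.0.0.2:4420, a null bdev, and a delay bdev layered on top with one-second latencies so plenty of I/O is still in flight when the subsystem is later deleted. Roughly, assuming rpc_cmd in this framework is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock:

    rpc() { /path/to/spdk/scripts/rpc.py "$@"; }   # stand-in for the suite's rpc_cmd
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_null_create NULL1 1000 512            # 1000 MiB backing bdev, 512 B blocks
    rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s read/write latencies (us)
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The initiator side, per the trace, is a 5 s queue-depth-128 run of 512 B random I/O at a 70/30 read/write mix: spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4.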
00:32:19.032 08:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:19.032 08:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.032 08:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:19.294 Write completed with error (sct=0, sc=8) 00:32:19.294 Write completed with error (sct=0, sc=8) 00:32:19.295 starting I/O failed: -6 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 starting I/O failed: -6 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 starting I/O failed: -6 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 starting I/O failed: -6 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 starting I/O failed: -6 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 starting I/O failed: -6 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 starting I/O failed: -6 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 starting I/O failed: -6 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 starting I/O failed: -6 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 starting I/O failed: -6 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 starting I/O failed: -6 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 [2024-11-28 08:31:16.399371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891680 is same with the state(6) to be set 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write 
completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 starting I/O failed: -6 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 starting I/O failed: -6 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 starting I/O failed: -6 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 starting I/O failed: -6 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read 
completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 starting I/O failed: -6 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 starting I/O failed: -6 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 starting I/O failed: -6 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 starting I/O failed: -6 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 starting I/O failed: -6 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 [2024-11-28 08:31:16.403636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f75a800d680 is same with the state(6) to be set 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 
Read completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Write completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:19.295 Read completed with error (sct=0, sc=8) 00:32:20.241 [2024-11-28 08:31:17.365470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8929b0 is same with the state(6) to be set 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Write completed with error (sct=0, sc=8) 00:32:20.241 Write completed with error (sct=0, sc=8) 00:32:20.241 Write completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Write completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Write completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Write completed with error (sct=0, sc=8) 00:32:20.241 Write completed with error (sct=0, sc=8) 00:32:20.241 Write completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 [2024-11-28 08:31:17.402996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8914a0 is same with the state(6) to be set 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Write completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Write completed with error (sct=0, sc=8) 00:32:20.241 Write completed with error (sct=0, sc=8) 00:32:20.241 Write completed with error (sct=0, sc=8) 00:32:20.241 Write completed with error (sct=0, sc=8) 00:32:20.241 Write completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Write completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Write completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 [2024-11-28 08:31:17.403454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x891860 is same with the state(6) to be set 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Write completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read 
completed with error (sct=0, sc=8) 00:32:20.241 Write completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Write completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Write completed with error (sct=0, sc=8) 00:32:20.241 [2024-11-28 08:31:17.404581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f75a8000c40 is same with the state(6) to be set 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Write completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Write completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Write completed with error (sct=0, sc=8) 00:32:20.241 Write completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 Read completed with error (sct=0, sc=8) 00:32:20.241 [2024-11-28 08:31:17.405272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f75a800d350 is same with the state(6) to be set 00:32:20.241 Initializing NVMe Controllers 00:32:20.241 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:20.241 Controller IO queue size 128, less than required. 00:32:20.241 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:20.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:20.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:20.241 Initialization complete. Launching workers. 
00:32:20.241 ======================================================== 00:32:20.242 Latency(us) 00:32:20.242 Device Information : IOPS MiB/s Average min max 00:32:20.242 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 165.76 0.08 904517.29 363.58 1006982.34 00:32:20.242 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 147.84 0.07 973058.08 289.35 2001248.90 00:32:20.242 ======================================================== 00:32:20.242 Total : 313.60 0.15 936829.38 289.35 2001248.90 00:32:20.242 00:32:20.242 [2024-11-28 08:31:17.405862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8929b0 (9): Bad file descriptor 00:32:20.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:32:20.242 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.242 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:32:20.242 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2184906 00:32:20.242 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:32:20.815 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:32:20.815 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2184906 00:32:20.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2184906) - No such process 00:32:20.815 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2184906 00:32:20.815 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:32:20.815 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2184906 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2184906 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:20.816 [2024-11-28 08:31:17.939587] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2185590 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2185590 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:20.816 08:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:20.816 [2024-11-28 08:31:18.039567] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
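The wall of 'completed with error (sct=0, sc=8)' lines above is the test doing its job: nvmf_delete_subsystem was issued while perf still had a 128-deep queue outstanding, so every in-flight command failed back to the initiator and perf aborted; the script then confirmed the process was really gone (kill -0 reports 'No such process', and NOT wait asserts a nonzero exit). The subsystem has now been recreated and a second, 3 s perf run started; the loop that follows simply polls for it. A sketch of that pattern, reconstructed from the delete_subsystem.sh fragments visible in the trace:

    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # pull it out from under perf
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do    # perf still running?
        (( delay++ > 30 )) && { echo 'perf did not exit'; exit 1; }
        sleep 0.5                                # ~15 s budget (the second phase caps at 20)
    done
    ! wait "$perf_pid"                           # perf must have exited with an error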
00:32:21.388 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:21.388 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2185590 00:32:21.388 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:21.960 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:21.960 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2185590 00:32:21.960 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:22.221 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:22.221 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2185590 00:32:22.221 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:22.793 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:22.793 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2185590 00:32:22.793 08:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:23.364 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:23.364 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2185590 00:32:23.364 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:23.935 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:23.935 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2185590 00:32:23.935 08:31:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:23.935 Initializing NVMe Controllers 00:32:23.935 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:23.935 Controller IO queue size 128, less than required. 00:32:23.935 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:23.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:23.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:23.935 Initialization complete. Launching workers. 
00:32:23.935 ======================================================== 00:32:23.935 Latency(us) 00:32:23.935 Device Information : IOPS MiB/s Average min max 00:32:23.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003483.67 1000251.34 1011308.34 00:32:23.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004227.51 1000155.72 1011670.76 00:32:23.935 ======================================================== 00:32:23.935 Total : 256.00 0.12 1003855.59 1000155.72 1011670.76 00:32:23.935 00:32:24.506 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2185590 00:32:24.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2185590) - No such process 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2185590 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:24.507 rmmod nvme_tcp 00:32:24.507 rmmod nvme_fabrics 00:32:24.507 rmmod nvme_keyring 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2184802 ']' 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2184802 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2184802 ']' 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2184802 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2184802 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2184802' 00:32:24.507 killing process with pid 2184802 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2184802 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2184802 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:24.507 08:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.060 08:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:27.060 00:32:27.060 real 0m18.446s 00:32:27.060 user 0m26.876s 00:32:27.060 sys 0m7.421s 00:32:27.060 08:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:27.060 08:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:27.060 ************************************ 00:32:27.060 END TEST nvmf_delete_subsystem 00:32:27.060 ************************************ 00:32:27.060 08:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:27.060 08:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:27.060 08:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:32:27.060 08:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:27.060 ************************************ 00:32:27.060 START TEST nvmf_host_management 00:32:27.060 ************************************ 00:32:27.060 08:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:27.060 * Looking for test storage... 00:32:27.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:27.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.061 --rc genhtml_branch_coverage=1 00:32:27.061 --rc genhtml_function_coverage=1 00:32:27.061 --rc genhtml_legend=1 00:32:27.061 --rc geninfo_all_blocks=1 00:32:27.061 --rc geninfo_unexecuted_blocks=1 00:32:27.061 00:32:27.061 ' 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:27.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.061 --rc genhtml_branch_coverage=1 00:32:27.061 --rc genhtml_function_coverage=1 00:32:27.061 --rc genhtml_legend=1 00:32:27.061 --rc geninfo_all_blocks=1 00:32:27.061 --rc geninfo_unexecuted_blocks=1 00:32:27.061 00:32:27.061 ' 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:27.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.061 --rc genhtml_branch_coverage=1 00:32:27.061 --rc genhtml_function_coverage=1 00:32:27.061 --rc genhtml_legend=1 00:32:27.061 --rc geninfo_all_blocks=1 00:32:27.061 --rc geninfo_unexecuted_blocks=1 00:32:27.061 00:32:27.061 ' 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:27.061 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.061 --rc genhtml_branch_coverage=1 00:32:27.061 --rc genhtml_function_coverage=1 00:32:27.061 --rc genhtml_legend=1 
00:32:27.061 --rc geninfo_all_blocks=1 00:32:27.061 --rc geninfo_unexecuted_blocks=1 00:32:27.061 00:32:27.061 ' 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:27.061 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:27.062 08:31:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:27.062 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:27.062 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:27.062 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:27.062 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:27.062 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:27.062 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:27.062 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:27.062 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:32:27.062 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:27.062 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:27.062 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:27.062 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:27.062 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:27.062 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.062 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:27.062 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.062 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:27.062 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:27.062 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:32:27.062 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:35.210 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:35.210 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:32:35.210 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:35.211 08:31:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:35.211 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:35.211 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
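The probe traced here is a two-stage idiom: first match the known e810 device IDs against the PCI bus (the two 0x8086 - 0x159b functions reported above), then, in the loop continuing below, resolve each PCI function to its kernel interface name by globbing sysfs. A minimal sketch of that second stage, using the same variable names as nvmf/common.sh (a condensation for illustration, not the shipped function):

for pci in "${pci_devs[@]}"; do
    # each bound network function exposes its netdev(s) under sysfs
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # keep only the interface name, e.g. .../net/cvl_0_0 -> cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done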
00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:35.211 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:35.211 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:35.211 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:35.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:35.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:32:35.212 00:32:35.212 --- 10.0.0.2 ping statistics --- 00:32:35.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:35.212 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:35.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:35.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:32:35.212 00:32:35.212 --- 10.0.0.1 ping statistics --- 00:32:35.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:35.212 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2190571 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2190571 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2190571 ']' 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:35.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:35.212 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:35.212 [2024-11-28 08:31:31.683628] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:35.212 [2024-11-28 08:31:31.684763] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:32:35.212 [2024-11-28 08:31:31.684816] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:35.212 [2024-11-28 08:31:31.783462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:35.212 [2024-11-28 08:31:31.837033] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:35.212 [2024-11-28 08:31:31.837083] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:35.212 [2024-11-28 08:31:31.837091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:35.212 [2024-11-28 08:31:31.837098] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:35.212 [2024-11-28 08:31:31.837104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:35.212 [2024-11-28 08:31:31.839449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:35.212 [2024-11-28 08:31:31.839611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:35.212 [2024-11-28 08:31:31.839745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:35.212 [2024-11-28 08:31:31.839745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:35.212 [2024-11-28 08:31:31.918272] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:35.212 [2024-11-28 08:31:31.919184] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:35.212 [2024-11-28 08:31:31.919448] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:35.212 [2024-11-28 08:31:31.920014] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:35.212 [2024-11-28 08:31:31.920061] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
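The setup traced above reduces to a small namespace topology: the target-side port (cvl_0_0) is moved into a private network namespace, the initiator-side port (cvl_0_1) stays in the root namespace, and nvmf_tgt is launched inside that namespace so host and target exercise a real TCP path on a single machine. The essential commands, as run above (paths abbreviated):

ip netns add cvl_0_0_ns_spdk                         # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                   # reachability check before starting the target
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E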
00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:35.473 [2024-11-28 08:31:32.544515] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:35.473 Malloc0 00:32:35.473 [2024-11-28 08:31:32.645076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2190674 00:32:35.473 08:31:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2190674 /var/tmp/bdevperf.sock 00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2190674 ']' 00:32:35.473 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:35.474 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:35.474 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:35.474 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:32:35.474 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:35.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:35.474 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:35.474 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:35.474 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:35.474 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:35.474 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:35.474 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:35.474 { 00:32:35.474 "params": { 00:32:35.474 "name": "Nvme$subsystem", 00:32:35.474 "trtype": "$TEST_TRANSPORT", 00:32:35.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:35.474 "adrfam": "ipv4", 00:32:35.474 "trsvcid": "$NVMF_PORT", 00:32:35.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:35.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:35.474 "hdgst": ${hdgst:-false}, 00:32:35.474 "ddgst": ${ddgst:-false} 00:32:35.474 }, 00:32:35.474 "method": "bdev_nvme_attach_controller" 00:32:35.474 } 00:32:35.474 EOF 00:32:35.474 )") 00:32:35.474 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:35.474 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
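gen_nvmf_target_json, traced here, assembles bdevperf's attach configuration by expanding one heredoc stanza per subsystem id into a bash array; the stanzas are then joined with IFS=, and the result (printed next in the trace) reaches bdevperf via --json /dev/fd/63 after a jq pass. A condensed sketch of that assembly idiom, with the fields shown in the trace; the IFS=,/"${config[*]}" join is an inference from the trace, and the shipped helper in nvmf/common.sh may add further wrapping around the stanzas:

config=()
for subsystem in "${@:-1}"; do   # defaults to subsystem 1; invoked with id 0 in this run
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
IFS=,                              # join multiple stanzas with commas
printf '%s\n' "${config[*]}" | jq .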
00:32:35.474 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:35.474 08:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:35.474 "params": { 00:32:35.474 "name": "Nvme0", 00:32:35.474 "trtype": "tcp", 00:32:35.474 "traddr": "10.0.0.2", 00:32:35.474 "adrfam": "ipv4", 00:32:35.474 "trsvcid": "4420", 00:32:35.474 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:35.474 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:35.474 "hdgst": false, 00:32:35.474 "ddgst": false 00:32:35.474 }, 00:32:35.474 "method": "bdev_nvme_attach_controller" 00:32:35.474 }' 00:32:35.735 [2024-11-28 08:31:32.765826] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:32:35.735 [2024-11-28 08:31:32.765896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190674 ] 00:32:35.735 [2024-11-28 08:31:32.859148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:35.735 [2024-11-28 08:31:32.912919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:35.996 Running I/O for 10 seconds... 00:32:36.571 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:36.571 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:36.571 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:36.571 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.571 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:36.571 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.571 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:36.571 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:32:36.571 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:36.572 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:32:36.572 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:32:36.572 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:32:36.572 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:32:36.572 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:32:36.572 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:32:36.572 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:32:36.572 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.572 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:36.572 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.572 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=398 00:32:36.572 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 398 -ge 100 ']' 00:32:36.572 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:32:36.572 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:32:36.572 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:32:36.572 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:36.572 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.572 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:36.572 [2024-11-28 08:31:33.652440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6aef20 is same with the state(6) to be set 00:32:36.572 [2024-11-28 08:31:33.652503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6aef20 is same with the state(6) to be set 00:32:36.572 [2024-11-28 08:31:33.652512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6aef20 is same with the state(6) to be set 00:32:36.572 [2024-11-28 08:31:33.652519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6aef20 is same with the state(6) to be set 00:32:36.572 [2024-11-28 08:31:33.652527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6aef20 is same with the state(6) to be set 00:32:36.572 [2024-11-28 08:31:33.652534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6aef20 is same with the state(6) to be set 00:32:36.572 [2024-11-28 08:31:33.652542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6aef20 is same with the state(6) to be set 00:32:36.572 [2024-11-28 08:31:33.652550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6aef20 is same with the state(6) to be set 00:32:36.572 [2024-11-28 08:31:33.652558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6aef20 is same with the state(6) to be set 00:32:36.572 [2024-11-28 08:31:33.652565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6aef20 is same with the state(6) to be set 00:32:36.572 [2024-11-28 08:31:33.652572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6aef20 is same with the state(6) to be set 00:32:36.572 
[... the preceding tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* message repeats verbatim, timestamps 08:31:33.652587 through 08:31:33.652961; duplicate lines elided ...] 00:32:36.572 [2024-11-28 08:31:33.653493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.572 [2024-11-28 08:31:33.653551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.572 [... analogous READ command / ABORTED - SQ DELETION completion pairs follow for READ cid:1 through cid:35, lba:57472 through lba:61824 in steps of 128; elided ...] 00:32:36.573 [2024-11-28 08:31:33.654208] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.573 [2024-11-28 08:31:33.654216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.573 [2024-11-28 08:31:33.654225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.573 [2024-11-28 08:31:33.654234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.573 [2024-11-28 08:31:33.654244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.573 [2024-11-28 08:31:33.654253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.573 [2024-11-28 08:31:33.654264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.573 [2024-11-28 08:31:33.654271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.573 [2024-11-28 08:31:33.654281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.573 [2024-11-28 08:31:33.654289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.573 [2024-11-28 08:31:33.654298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.574 [2024-11-28 08:31:33.654306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.574 [2024-11-28 08:31:33.654316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.574 [2024-11-28 08:31:33.654324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.574 [2024-11-28 08:31:33.654334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.574 [2024-11-28 08:31:33.654341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.574 [2024-11-28 08:31:33.654351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.574 [2024-11-28 08:31:33.654358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.574 [2024-11-28 08:31:33.654368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.574 [2024-11-28 08:31:33.654376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.574 [2024-11-28 08:31:33.654386] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.574 [2024-11-28 08:31:33.654393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.574 [2024-11-28 08:31:33.654403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.574 [2024-11-28 08:31:33.654410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.574 [2024-11-28 08:31:33.654420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.574 [2024-11-28 08:31:33.654428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.574 [2024-11-28 08:31:33.654437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.574 [2024-11-28 08:31:33.654444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.574 [2024-11-28 08:31:33.654454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.574 [2024-11-28 08:31:33.654462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.574 [2024-11-28 08:31:33.654472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.574 [2024-11-28 08:31:33.654482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.574 [2024-11-28 08:31:33.654492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.574 [2024-11-28 08:31:33.654499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.574 [2024-11-28 08:31:33.654509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.574 [2024-11-28 08:31:33.654517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.574 [2024-11-28 08:31:33.654527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.574 [2024-11-28 08:31:33.654534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.574 [2024-11-28 08:31:33.654544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.574 [2024-11-28 08:31:33.654552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.574 [2024-11-28 08:31:33.654562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.574 [2024-11-28 08:31:33.654569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.574 [2024-11-28 08:31:33.654579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.574 [2024-11-28 08:31:33.654586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.574 [2024-11-28 08:31:33.654596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.574 [2024-11-28 08:31:33.654603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.574 [2024-11-28 08:31:33.654613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.574 [2024-11-28 08:31:33.654620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.574 [2024-11-28 08:31:33.654630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.574 [2024-11-28 08:31:33.654637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.574 [2024-11-28 08:31:33.654647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.574 [2024-11-28 08:31:33.654655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.574 [2024-11-28 08:31:33.654664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.574 [2024-11-28 08:31:33.654671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.574 [2024-11-28 08:31:33.654681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:36.574 [2024-11-28 08:31:33.654688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.574 [2024-11-28 08:31:33.654699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9eee0 is same with the state(6) to be set 00:32:36.574 [2024-11-28 08:31:33.656004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:36.574 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.574 task offset: 57344 on job bdev=Nvme0n1 fails 00:32:36.574 00:32:36.574 Latency(us) 00:32:36.574 [2024-11-28T07:31:33.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.574 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:36.574 Job: Nvme0n1 ended in about 
0.37 seconds with error 00:32:36.574 Verification LBA range: start 0x0 length 0x400 00:32:36.574 Nvme0n1 : 0.37 1202.75 75.17 171.82 0.00 45052.59 10103.47 38010.88 00:32:36.574 [2024-11-28T07:31:33.863Z] =================================================================================================================== 00:32:36.574 [2024-11-28T07:31:33.863Z] Total : 1202.75 75.17 171.82 0.00 45052.59 10103.47 38010.88 00:32:36.574 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:36.574 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:36.574 [2024-11-28 08:31:33.658323] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:36.574 [2024-11-28 08:31:33.658368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1886010 (9): Bad file descriptor 00:32:36.574 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:36.574 [2024-11-28 08:31:33.659990] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:32:36.574 [2024-11-28 08:31:33.660078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:36.574 [2024-11-28 08:31:33.660106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:36.574 [2024-11-28 08:31:33.660121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:32:36.574 [2024-11-28 08:31:33.660130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:32:36.574 [2024-11-28 08:31:33.660140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:36.574 [2024-11-28 08:31:33.660148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1886010 00:32:36.574 [2024-11-28 08:31:33.660180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1886010 (9): Bad file descriptor 00:32:36.574 [2024-11-28 08:31:33.660195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:36.574 [2024-11-28 08:31:33.660203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:36.574 [2024-11-28 08:31:33.660213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:36.574 [2024-11-28 08:31:33.660226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
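The bdevperf failure above is the intended first half of the host-management test: the initiator connects before its host NQN is on the subsystem's allow list, nvmf_qpair_access_allowed rejects the CONNECT (sct 1, sc 132), and the controller reset loop gives up. The rpc_cmd in the trace then authorizes the host so the follow-up run can pass. A minimal sketch of the same authorization step issued by hand with SPDK's rpc.py, assuming a running target and the default RPC socket (only the two NQNs come from this run; the standalone invocation is illustrative):

    # Allow host0 to connect to cnode0; until this is applied, FABRIC CONNECT
    # fails with "Subsystem ... does not allow host ..." as logged above.
    scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0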
00:32:36.574 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:36.574 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:32:37.518 08:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2190674 00:32:37.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2190674) - No such process 00:32:37.518 08:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:32:37.518 08:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:32:37.518 08:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:32:37.518 08:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:32:37.518 08:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:37.519 08:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:37.519 08:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:37.519 08:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:37.519 { 00:32:37.519 "params": { 00:32:37.519 "name": "Nvme$subsystem", 00:32:37.519 "trtype": "$TEST_TRANSPORT", 00:32:37.519 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:37.519 "adrfam": "ipv4", 00:32:37.519 "trsvcid": "$NVMF_PORT", 00:32:37.519 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:37.519 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:37.519 "hdgst": ${hdgst:-false}, 00:32:37.519 "ddgst": ${ddgst:-false} 00:32:37.519 }, 00:32:37.519 "method": "bdev_nvme_attach_controller" 00:32:37.519 } 00:32:37.519 EOF 00:32:37.519 )") 00:32:37.519 08:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:37.519 08:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:32:37.519 08:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:37.519 08:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:37.519 "params": { 00:32:37.519 "name": "Nvme0", 00:32:37.519 "trtype": "tcp", 00:32:37.519 "traddr": "10.0.0.2", 00:32:37.519 "adrfam": "ipv4", 00:32:37.519 "trsvcid": "4420", 00:32:37.519 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:37.519 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:37.519 "hdgst": false, 00:32:37.519 "ddgst": false 00:32:37.519 }, 00:32:37.519 "method": "bdev_nvme_attach_controller" 00:32:37.519 }' 00:32:37.519 [2024-11-28 08:31:34.732156] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
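gen_nvmf_target_json above expands its heredoc template once (subsystem 0) and hands the printed JSON to bdevperf over /dev/fd/62, so the whole bdev layer comes from a single bdev_nvme_attach_controller call against the TCP listener at 10.0.0.2:4420. A sketch of the equivalent invocation without the file-descriptor indirection, assuming the JSON shown above has been saved to a file (the file name is illustrative; the workload flags are the ones from the trace):

    # -q 64: queue depth, -o 65536: 64 KiB I/O size, -w verify: verify workload,
    # -t 1: one-second run, matching the harness invocation above.
    ./build/examples/bdevperf --json /tmp/nvme0_attach.json -q 64 -o 65536 -w verify -t 1

With the host NQN now on the allow list, this second run completes normally (the ~1788 IOPS result below) instead of aborting on SQ deletion.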
00:32:37.519 [2024-11-28 08:31:34.732242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2191083 ] 00:32:37.779 [2024-11-28 08:31:34.825010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:37.779 [2024-11-28 08:31:34.879419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:38.038 Running I/O for 1 seconds... 00:32:38.976 1810.00 IOPS, 113.12 MiB/s 00:32:38.976 Latency(us) 00:32:38.976 [2024-11-28T07:31:36.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:38.976 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:38.976 Verification LBA range: start 0x0 length 0x400 00:32:38.976 Nvme0n1 : 1.05 1787.84 111.74 0.00 0.00 33831.11 2334.72 42379.95 00:32:38.976 [2024-11-28T07:31:36.265Z] =================================================================================================================== 00:32:38.976 [2024-11-28T07:31:36.265Z] Total : 1787.84 111.74 0.00 0.00 33831.11 2334.72 42379.95 00:32:38.976 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:32:38.976 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:32:38.976 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:38.976 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:39.237 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:32:39.237 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:39.237 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:32:39.237 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:39.237 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:32:39.237 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:39.237 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:39.237 rmmod nvme_tcp 00:32:39.237 rmmod nvme_fabrics 00:32:39.237 rmmod nvme_keyring 00:32:39.237 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:39.237 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:32:39.237 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:32:39.237 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2190571 ']' 00:32:39.237 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2190571 00:32:39.237 08:31:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2190571 ']' 00:32:39.237 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2190571 00:32:39.237 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:32:39.237 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:39.237 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2190571 00:32:39.237 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:39.237 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:39.237 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2190571' 00:32:39.237 killing process with pid 2190571 00:32:39.237 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2190571 00:32:39.237 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2190571 00:32:39.237 [2024-11-28 08:31:36.516759] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:32:39.498 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:39.498 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:39.498 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:39.498 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:32:39.498 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:32:39.498 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:32:39.498 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:39.498 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:39.498 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:39.498 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.498 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:39.498 08:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.411 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:41.411 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:32:41.411 00:32:41.411 real 0m14.735s 00:32:41.411 user 
0m19.617s 00:32:41.411 sys 0m7.453s 00:32:41.411 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:41.411 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:41.411 ************************************ 00:32:41.411 END TEST nvmf_host_management 00:32:41.411 ************************************ 00:32:41.411 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:41.411 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:41.411 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:41.411 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:41.673 ************************************ 00:32:41.673 START TEST nvmf_lvol 00:32:41.673 ************************************ 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:41.673 * Looking for test storage... 00:32:41.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:41.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.673 --rc genhtml_branch_coverage=1 00:32:41.673 --rc genhtml_function_coverage=1 00:32:41.673 --rc genhtml_legend=1 00:32:41.673 --rc geninfo_all_blocks=1 00:32:41.673 --rc geninfo_unexecuted_blocks=1 00:32:41.673 00:32:41.673 ' 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:41.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.673 --rc genhtml_branch_coverage=1 00:32:41.673 --rc genhtml_function_coverage=1 00:32:41.673 --rc genhtml_legend=1 00:32:41.673 --rc geninfo_all_blocks=1 00:32:41.673 --rc geninfo_unexecuted_blocks=1 00:32:41.673 00:32:41.673 ' 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:41.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.673 --rc genhtml_branch_coverage=1 00:32:41.673 --rc genhtml_function_coverage=1 00:32:41.673 --rc genhtml_legend=1 00:32:41.673 --rc geninfo_all_blocks=1 00:32:41.673 --rc geninfo_unexecuted_blocks=1 00:32:41.673 00:32:41.673 ' 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:41.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:41.673 --rc genhtml_branch_coverage=1 00:32:41.673 --rc genhtml_function_coverage=1 
00:32:41.673 --rc genhtml_legend=1 00:32:41.673 --rc geninfo_all_blocks=1 00:32:41.673 --rc geninfo_unexecuted_blocks=1 00:32:41.673 00:32:41.673 ' 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:41.673 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:41.674 08:31:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:41.674 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:41.934 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:41.934 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:41.934 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:32:41.934 08:31:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:50.078 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:50.079 08:31:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:50.079 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:50.079 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:50.079 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:50.079 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:50.079 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:50.080 
08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:50.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:50.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:32:50.080 00:32:50.080 --- 10.0.0.2 ping statistics --- 00:32:50.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.080 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:50.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:50.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:32:50.080 00:32:50.080 --- 10.0.0.1 ping statistics --- 00:32:50.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:50.080 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2195632 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2195632 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2195632 ']' 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:50.080 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:50.080 [2024-11-28 08:31:46.562396] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
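The trace above is the harness's standard two-port TCP topology: one E810 port (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the host namespace as the initiator at 10.0.0.1; an iptables rule opens the NVMe/TCP port and a ping in each direction confirms connectivity. A condensed replay of what nvmf_tcp_init did in this run (a sketch for orientation only; interface and namespace names are the ones printed above, and it assumes root):

  # Target side lives in a network namespace; the initiator stays in the host.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port on the initiator-facing interface (the harness
  # tags the rule with an SPDK_NVMF comment so teardown can strip it later).
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Connectivity sanity checks, matching the two pings in the trace.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1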
00:32:50.080 [2024-11-28 08:31:46.563539] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:32:50.080 [2024-11-28 08:31:46.563587] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:50.080 [2024-11-28 08:31:46.662973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:50.080 [2024-11-28 08:31:46.714829] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:50.080 [2024-11-28 08:31:46.714886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:50.080 [2024-11-28 08:31:46.714894] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:50.080 [2024-11-28 08:31:46.714902] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:50.080 [2024-11-28 08:31:46.714908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:50.080 [2024-11-28 08:31:46.716772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.080 [2024-11-28 08:31:46.716932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:50.080 [2024-11-28 08:31:46.716932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:50.080 [2024-11-28 08:31:46.795002] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:50.080 [2024-11-28 08:31:46.795889] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:50.080 [2024-11-28 08:31:46.796372] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:50.080 [2024-11-28 08:31:46.796533] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
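At this point nvmf_tgt is running inside the namespace with three reactors (cores 0, 1 and 2) in interrupt mode. Stripped of the xtrace noise, the nvmf_lvol test body that follows boils down to the rpc.py sequence below; this is a condensed reading of the trace, not additional steps. rpc.py here is scripts/rpc.py from this workspace, sizes are in MiB, and <lvs>, <lvol>, <snap> and <clone> stand for the UUIDs this particular run happened to allocate:

  # Build an lvol on a RAID-0 of two malloc bdevs, export it over NVMe/TCP,
  # then exercise snapshot/resize/clone/inflate while I/O is in flight.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                      # -> Malloc0
  rpc.py bdev_malloc_create 64 512                      # -> Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  rpc.py bdev_lvol_create_lvstore raid0 lvs             # -> <lvs>
  rpc.py bdev_lvol_create -u <lvs> lvol 20              # -> <lvol>, 20 MiB volume
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # While spdk_nvme_perf drives 4 KiB randwrite at the subsystem (cores 3-4,
  # mask 0x18, against the target's cores 0-2):
  rpc.py bdev_lvol_snapshot <lvol> MY_SNAPSHOT          # -> <snap>
  rpc.py bdev_lvol_resize <lvol> 30                     # grow the live volume to 30 MiB
  rpc.py bdev_lvol_clone <snap> MY_CLONE                # -> <clone>
  rpc.py bdev_lvol_inflate <clone>                      # detach the clone from its snapshot

The point of issuing the snapshot, resize, clone and inflate RPCs while spdk_nvme_perf is writing is to exercise lvol metadata operations under live I/O; the perf summary further down shows the workload ran to completion.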
00:32:50.342 08:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:50.342 08:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:32:50.342 08:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:50.342 08:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:50.342 08:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:50.342 08:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:50.342 08:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:50.342 [2024-11-28 08:31:47.605842] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:50.604 08:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:50.604 08:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:32:50.864 08:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:50.864 08:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:32:50.864 08:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:32:51.125 08:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:32:51.387 08:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=70a2bf91-93fb-43e3-944f-3c67586fee4e 00:32:51.387 08:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 70a2bf91-93fb-43e3-944f-3c67586fee4e lvol 20 00:32:51.387 08:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c18d0dbd-80fe-44ba-b3c1-ffec42797613 00:32:51.387 08:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:51.649 08:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c18d0dbd-80fe-44ba-b3c1-ffec42797613 00:32:51.911 08:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:51.911 [2024-11-28 08:31:49.173814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:32:52.172 08:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:52.172 08:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2196181 00:32:52.172 08:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:32:52.172 08:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:32:53.559 08:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c18d0dbd-80fe-44ba-b3c1-ffec42797613 MY_SNAPSHOT 00:32:53.559 08:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2e7d3040-a5d9-4f78-a4fc-d761ec608dda 00:32:53.559 08:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c18d0dbd-80fe-44ba-b3c1-ffec42797613 30 00:32:53.820 08:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2e7d3040-a5d9-4f78-a4fc-d761ec608dda MY_CLONE 00:32:54.081 08:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c1d705c0-9695-4707-bdd9-434f8516e7a1 00:32:54.081 08:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c1d705c0-9695-4707-bdd9-434f8516e7a1 00:32:54.342 08:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2196181
00:33:04.344 Initializing NVMe Controllers
00:33:04.344 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:33:04.344 Controller IO queue size 128, less than required.
00:33:04.344 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:04.344 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:33:04.344 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:33:04.344 Initialization complete. Launching workers.
00:33:04.344 ========================================================
00:33:04.345                                                                                               Latency(us)
00:33:04.345 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:33:04.345 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:   14727.90      57.53    8691.71     795.28   79914.16
00:33:04.345 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   15079.60      58.90    8489.90    1664.21   82490.13
00:33:04.345 ========================================================
00:33:04.345 Total                                                                  :   29807.50     116.44    8589.61     795.28   82490.13
00:33:04.345
00:33:04.345 [2024-11-28 08:31:59.920753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5e830 is same with the state(6) to be set 00:33:04.345 08:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c18d0dbd-80fe-44ba-b3c1-ffec42797613 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 70a2bf91-93fb-43e3-944f-3c67586fee4e 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:04.345 rmmod nvme_tcp 00:33:04.345 rmmod nvme_fabrics 00:33:04.345 rmmod nvme_keyring 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2195632 ']' 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2195632 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2195632 ']' 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2195632 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:33:04.345 08:32:00
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2195632 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2195632' 00:33:04.345 killing process with pid 2195632 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2195632 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2195632 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:04.345 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.736 08:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:05.736 00:33:05.736 real 0m24.115s 00:33:05.736 user 0m56.674s 00:33:05.736 sys 0m10.754s 00:33:05.736 08:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:05.736 08:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:05.736 ************************************ 00:33:05.736 END TEST nvmf_lvol 00:33:05.736 ************************************ 00:33:05.736 08:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:05.736 08:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:05.736 08:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:05.736 08:32:02 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:05.736 ************************************ 00:33:05.736 START TEST nvmf_lvs_grow 00:33:05.736 ************************************ 00:33:05.736 08:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:05.736 * Looking for test storage... 00:33:05.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:05.736 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:05.736 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:33:05.736 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:33:05.996 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:05.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.997 --rc genhtml_branch_coverage=1 00:33:05.997 --rc genhtml_function_coverage=1 00:33:05.997 --rc genhtml_legend=1 00:33:05.997 --rc geninfo_all_blocks=1 00:33:05.997 --rc geninfo_unexecuted_blocks=1 00:33:05.997 00:33:05.997 ' 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:05.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.997 --rc genhtml_branch_coverage=1 00:33:05.997 --rc genhtml_function_coverage=1 00:33:05.997 --rc genhtml_legend=1 00:33:05.997 --rc geninfo_all_blocks=1 00:33:05.997 --rc geninfo_unexecuted_blocks=1 00:33:05.997 00:33:05.997 ' 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:05.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.997 --rc genhtml_branch_coverage=1 00:33:05.997 --rc genhtml_function_coverage=1 00:33:05.997 --rc genhtml_legend=1 00:33:05.997 --rc geninfo_all_blocks=1 00:33:05.997 --rc geninfo_unexecuted_blocks=1 00:33:05.997 00:33:05.997 ' 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:05.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:05.997 --rc genhtml_branch_coverage=1 00:33:05.997 --rc genhtml_function_coverage=1 00:33:05.997 --rc genhtml_legend=1 00:33:05.997 --rc geninfo_all_blocks=1 00:33:05.997 --rc geninfo_unexecuted_blocks=1 00:33:05.997 00:33:05.997 ' 00:33:05.997 08:32:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
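The PATH dumps above come from paths/export.sh being sourced repeatedly while nvmf/common.sh loads; the part that matters for this suite is how build_nvmf_app_args assembles the target's argv. A condensed sketch follows. The guard variable's name is illustrative, not taken from the source: the xtrace only shows the already-evaluated '[' 1 -eq 1 ']' before --interrupt-mode is appended on the next lines.

  # Sketch of the argv assembly visible in the trace around this point.
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)    # shared-memory id + full trace mask
  NVMF_APP+=("${NO_HUGE[@]}")                    # empty unless a no-hugepages run
  if [ "$interrupt_mode" -eq 1 ]; then           # evaluates as '[' 1 -eq 1 ']' here
      NVMF_APP+=(--interrupt-mode)
  fi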
00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:33:05.997 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:14.208 08:32:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:14.208 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:14.208 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:14.208 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:14.208 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:14.208 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:14.209 08:32:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:14.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:14.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:33:14.209 00:33:14.209 --- 10.0.0.2 ping statistics --- 00:33:14.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.209 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:14.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:14.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:33:14.209 00:33:14.209 --- 10.0.0.1 ping statistics --- 00:33:14.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.209 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2202360 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2202360 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2202360 ']' 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:14.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:14.209 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:14.209 [2024-11-28 08:32:10.787554] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
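The same namespace plumbing has now been rebuilt for the nvmf_lvs_grow suite, this time with a single-core target (-m 0x1). The lvs_grow_clean case traced below grows a logical volume store that sits on a file-backed AIO bdev; condensed, and with the backing-file path shortened for readability, it is the sequence below (<lvs> stands for the lvstore UUID the run prints; sizes and cluster geometry are the ones used here):

  # Create a 200 MiB file-backed AIO bdev and an lvstore with 4 MiB clusters.
  truncate -s 200M aio_bdev_file
  rpc.py bdev_aio_create aio_bdev_file aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
         --md-pages-per-cluster-ratio 300 aio_bdev lvs   # reports 49 data clusters
  rpc.py bdev_lvol_create -u <lvs> lvol 150              # 150 MiB volume
  # Grow the backing file and let the AIO bdev pick up the new size
  # (51200 -> 102400 blocks in this run).
  truncate -s 400M aio_bdev_file
  rpc.py bdev_aio_rescan aio_bdev
  # The lvstore still reports 49 clusters until the test explicitly grows it
  # (the grow step itself happens later, outside this excerpt).
  rpc.py bdev_lvol_get_lvstores -u <lvs> | jq -r '.[0].total_data_clusters'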
00:33:14.209 [2024-11-28 08:32:10.788684] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:33:14.209 [2024-11-28 08:32:10.788737] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:14.209 [2024-11-28 08:32:10.887497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.209 [2024-11-28 08:32:10.939536] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:14.209 [2024-11-28 08:32:10.939590] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:14.209 [2024-11-28 08:32:10.939599] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:14.209 [2024-11-28 08:32:10.939606] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:14.209 [2024-11-28 08:32:10.939613] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:14.209 [2024-11-28 08:32:10.940371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.209 [2024-11-28 08:32:11.018008] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:14.209 [2024-11-28 08:32:11.018314] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:14.536 08:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:14.536 08:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:33:14.536 08:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:14.536 08:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:14.536 08:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:14.536 08:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:14.536 08:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:14.798 [2024-11-28 08:32:11.817262] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:14.798 08:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:33:14.798 08:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:14.798 08:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:14.798 08:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:14.798 ************************************ 00:33:14.798 START TEST lvs_grow_clean 00:33:14.798 ************************************ 00:33:14.798 08:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:33:14.798 08:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:14.798 08:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:14.798 08:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:14.798 08:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:14.798 08:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:14.798 08:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:14.798 08:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:14.798 08:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:14.798 08:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:15.060 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:15.060 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:15.060 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=58194faf-0b7e-4cb3-bb3c-01f691928f90 00:33:15.060 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58194faf-0b7e-4cb3-bb3c-01f691928f90 00:33:15.060 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:15.322 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:15.322 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:15.322 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 58194faf-0b7e-4cb3-bb3c-01f691928f90 lvol 150 00:33:15.583 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=dd736c40-ced7-49d5-a494-4cc4bed31cab 00:33:15.583 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:15.583 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:15.583 [2024-11-28 08:32:12.860918] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:15.583 [2024-11-28 08:32:12.861083] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:15.583 true 00:33:15.843 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58194faf-0b7e-4cb3-bb3c-01f691928f90 00:33:15.843 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:15.843 08:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:15.843 08:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:16.103 08:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dd736c40-ced7-49d5-a494-4cc4bed31cab 00:33:16.364 08:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:16.364 [2024-11-28 08:32:13.569611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:16.364 08:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:16.625 08:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2203059 00:33:16.625 08:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:16.625 08:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:16.625 08:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2203059 /var/tmp/bdevperf.sock 00:33:16.625 08:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2203059 ']' 00:33:16.625 08:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:33:16.625 08:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:16.625 08:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:16.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:16.625 08:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:16.625 08:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:16.625 [2024-11-28 08:32:13.834755] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:33:16.625 [2024-11-28 08:32:13.834828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2203059 ] 00:33:16.887 [2024-11-28 08:32:13.925710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.887 [2024-11-28 08:32:13.977412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:17.460 08:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:17.460 08:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:33:17.460 08:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:18.041 Nvme0n1 00:33:18.041 08:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:18.041 [ 00:33:18.041 { 00:33:18.041 "name": "Nvme0n1", 00:33:18.041 "aliases": [ 00:33:18.041 "dd736c40-ced7-49d5-a494-4cc4bed31cab" 00:33:18.041 ], 00:33:18.041 "product_name": "NVMe disk", 00:33:18.041 "block_size": 4096, 00:33:18.041 "num_blocks": 38912, 00:33:18.041 "uuid": "dd736c40-ced7-49d5-a494-4cc4bed31cab", 00:33:18.041 "numa_id": 0, 00:33:18.041 "assigned_rate_limits": { 00:33:18.041 "rw_ios_per_sec": 0, 00:33:18.041 "rw_mbytes_per_sec": 0, 00:33:18.041 "r_mbytes_per_sec": 0, 00:33:18.041 "w_mbytes_per_sec": 0 00:33:18.041 }, 00:33:18.041 "claimed": false, 00:33:18.041 "zoned": false, 00:33:18.041 "supported_io_types": { 00:33:18.041 "read": true, 00:33:18.041 "write": true, 00:33:18.042 "unmap": true, 00:33:18.042 "flush": true, 00:33:18.042 "reset": true, 00:33:18.042 "nvme_admin": true, 00:33:18.042 "nvme_io": true, 00:33:18.042 "nvme_io_md": false, 00:33:18.042 "write_zeroes": true, 00:33:18.042 "zcopy": false, 00:33:18.042 "get_zone_info": false, 00:33:18.042 "zone_management": false, 00:33:18.042 "zone_append": false, 00:33:18.042 "compare": true, 00:33:18.042 "compare_and_write": true, 00:33:18.042 "abort": true, 00:33:18.042 "seek_hole": false, 00:33:18.042 "seek_data": false, 00:33:18.042 "copy": true, 
00:33:18.042 "nvme_iov_md": false 00:33:18.042 }, 00:33:18.042 "memory_domains": [ 00:33:18.042 { 00:33:18.042 "dma_device_id": "system", 00:33:18.042 "dma_device_type": 1 00:33:18.042 } 00:33:18.042 ], 00:33:18.042 "driver_specific": { 00:33:18.042 "nvme": [ 00:33:18.042 { 00:33:18.042 "trid": { 00:33:18.042 "trtype": "TCP", 00:33:18.042 "adrfam": "IPv4", 00:33:18.042 "traddr": "10.0.0.2", 00:33:18.042 "trsvcid": "4420", 00:33:18.042 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:18.042 }, 00:33:18.042 "ctrlr_data": { 00:33:18.042 "cntlid": 1, 00:33:18.042 "vendor_id": "0x8086", 00:33:18.042 "model_number": "SPDK bdev Controller", 00:33:18.042 "serial_number": "SPDK0", 00:33:18.042 "firmware_revision": "25.01", 00:33:18.042 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:18.042 "oacs": { 00:33:18.042 "security": 0, 00:33:18.042 "format": 0, 00:33:18.042 "firmware": 0, 00:33:18.042 "ns_manage": 0 00:33:18.042 }, 00:33:18.042 "multi_ctrlr": true, 00:33:18.042 "ana_reporting": false 00:33:18.042 }, 00:33:18.042 "vs": { 00:33:18.042 "nvme_version": "1.3" 00:33:18.042 }, 00:33:18.042 "ns_data": { 00:33:18.042 "id": 1, 00:33:18.042 "can_share": true 00:33:18.042 } 00:33:18.042 } 00:33:18.042 ], 00:33:18.042 "mp_policy": "active_passive" 00:33:18.042 } 00:33:18.042 } 00:33:18.042 ] 00:33:18.042 08:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2203399 00:33:18.042 08:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:18.042 08:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:18.307 Running I/O for 10 seconds... 
00:33:19.249 Latency(us) 00:33:19.249 [2024-11-28T07:32:16.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:19.249 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:19.250 Nvme0n1 : 1.00 16891.00 65.98 0.00 0.00 0.00 0.00 0.00 00:33:19.250 [2024-11-28T07:32:16.539Z] =================================================================================================================== 00:33:19.250 [2024-11-28T07:32:16.539Z] Total : 16891.00 65.98 0.00 0.00 0.00 0.00 0.00 00:33:19.250 00:33:20.195 08:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 58194faf-0b7e-4cb3-bb3c-01f691928f90 00:33:20.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:20.195 Nvme0n1 : 2.00 17081.50 66.72 0.00 0.00 0.00 0.00 0.00 00:33:20.195 [2024-11-28T07:32:17.484Z] =================================================================================================================== 00:33:20.195 [2024-11-28T07:32:17.484Z] Total : 17081.50 66.72 0.00 0.00 0.00 0.00 0.00 00:33:20.195 00:33:20.195 true 00:33:20.457 08:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58194faf-0b7e-4cb3-bb3c-01f691928f90 00:33:20.457 08:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:20.457 08:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:20.457 08:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:20.457 08:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2203399 00:33:21.399 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:21.399 Nvme0n1 : 3.00 17356.67 67.80 0.00 0.00 0.00 0.00 0.00 00:33:21.399 [2024-11-28T07:32:18.688Z] =================================================================================================================== 00:33:21.399 [2024-11-28T07:32:18.688Z] Total : 17356.67 67.80 0.00 0.00 0.00 0.00 0.00 00:33:21.399 00:33:22.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:22.339 Nvme0n1 : 4.00 18002.25 70.32 0.00 0.00 0.00 0.00 0.00 00:33:22.339 [2024-11-28T07:32:19.628Z] =================================================================================================================== 00:33:22.339 [2024-11-28T07:32:19.628Z] Total : 18002.25 70.32 0.00 0.00 0.00 0.00 0.00 00:33:22.339 00:33:23.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:23.280 Nvme0n1 : 5.00 19481.80 76.10 0.00 0.00 0.00 0.00 0.00 00:33:23.280 [2024-11-28T07:32:20.569Z] =================================================================================================================== 00:33:23.280 [2024-11-28T07:32:20.569Z] Total : 19481.80 76.10 0.00 0.00 0.00 0.00 0.00 00:33:23.280 00:33:24.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:24.222 Nvme0n1 : 6.00 20478.83 80.00 0.00 0.00 0.00 0.00 0.00 00:33:24.222 [2024-11-28T07:32:21.511Z] 
=================================================================================================================== 00:33:24.222 [2024-11-28T07:32:21.511Z] Total : 20478.83 80.00 0.00 0.00 0.00 0.00 0.00 00:33:24.222 00:33:25.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:25.166 Nvme0n1 : 7.00 21197.86 82.80 0.00 0.00 0.00 0.00 0.00 00:33:25.166 [2024-11-28T07:32:22.455Z] =================================================================================================================== 00:33:25.166 [2024-11-28T07:32:22.455Z] Total : 21197.86 82.80 0.00 0.00 0.00 0.00 0.00 00:33:25.166 00:33:26.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:26.551 Nvme0n1 : 8.00 21739.00 84.92 0.00 0.00 0.00 0.00 0.00 00:33:26.551 [2024-11-28T07:32:23.840Z] =================================================================================================================== 00:33:26.551 [2024-11-28T07:32:23.840Z] Total : 21739.00 84.92 0.00 0.00 0.00 0.00 0.00 00:33:26.551 00:33:27.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:27.493 Nvme0n1 : 9.00 22159.89 86.56 0.00 0.00 0.00 0.00 0.00 00:33:27.493 [2024-11-28T07:32:24.782Z] =================================================================================================================== 00:33:27.493 [2024-11-28T07:32:24.782Z] Total : 22159.89 86.56 0.00 0.00 0.00 0.00 0.00 00:33:27.493 00:33:28.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:28.434 Nvme0n1 : 10.00 22496.60 87.88 0.00 0.00 0.00 0.00 0.00 00:33:28.434 [2024-11-28T07:32:25.723Z] =================================================================================================================== 00:33:28.434 [2024-11-28T07:32:25.723Z] Total : 22496.60 87.88 0.00 0.00 0.00 0.00 0.00 00:33:28.434 00:33:28.434 00:33:28.434 Latency(us) 00:33:28.434 [2024-11-28T07:32:25.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:28.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:28.434 Nvme0n1 : 10.01 22496.35 87.88 0.00 0.00 5686.56 2935.47 31457.28 00:33:28.434 [2024-11-28T07:32:25.723Z] =================================================================================================================== 00:33:28.434 [2024-11-28T07:32:25.723Z] Total : 22496.35 87.88 0.00 0.00 5686.56 2935.47 31457.28 00:33:28.434 { 00:33:28.434 "results": [ 00:33:28.434 { 00:33:28.434 "job": "Nvme0n1", 00:33:28.434 "core_mask": "0x2", 00:33:28.434 "workload": "randwrite", 00:33:28.434 "status": "finished", 00:33:28.434 "queue_depth": 128, 00:33:28.434 "io_size": 4096, 00:33:28.434 "runtime": 10.005801, 00:33:28.434 "iops": 22496.349867441895, 00:33:28.434 "mibps": 87.8763666696949, 00:33:28.434 "io_failed": 0, 00:33:28.434 "io_timeout": 0, 00:33:28.434 "avg_latency_us": 5686.558742806709, 00:33:28.434 "min_latency_us": 2935.4666666666667, 00:33:28.434 "max_latency_us": 31457.28 00:33:28.434 } 00:33:28.434 ], 00:33:28.434 "core_count": 1 00:33:28.434 } 00:33:28.434 08:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2203059 00:33:28.434 08:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2203059 ']' 00:33:28.434 08:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2203059 00:33:28.434 
08:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:33:28.434 08:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:28.434 08:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2203059 00:33:28.434 08:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:28.434 08:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:28.434 08:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2203059' 00:33:28.434 killing process with pid 2203059 00:33:28.434 08:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2203059 00:33:28.434 Received shutdown signal, test time was about 10.000000 seconds 00:33:28.434 00:33:28.434 Latency(us) 00:33:28.434 [2024-11-28T07:32:25.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:28.434 [2024-11-28T07:32:25.723Z] =================================================================================================================== 00:33:28.434 [2024-11-28T07:32:25.723Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:28.434 08:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2203059 00:33:28.434 08:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:28.694 08:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:28.695 08:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58194faf-0b7e-4cb3-bb3c-01f691928f90 00:33:28.695 08:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:28.955 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:28.955 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:33:28.955 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:29.216 [2024-11-28 08:32:26.264960] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:29.216 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58194faf-0b7e-4cb3-bb3c-01f691928f90 00:33:29.216 
08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:33:29.216 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58194faf-0b7e-4cb3-bb3c-01f691928f90 00:33:29.216 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:29.216 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:29.216 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:29.216 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:29.216 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:29.216 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:29.216 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:29.216 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:29.216 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58194faf-0b7e-4cb3-bb3c-01f691928f90 00:33:29.216 request: 00:33:29.216 { 00:33:29.216 "uuid": "58194faf-0b7e-4cb3-bb3c-01f691928f90", 00:33:29.216 "method": "bdev_lvol_get_lvstores", 00:33:29.216 "req_id": 1 00:33:29.216 } 00:33:29.216 Got JSON-RPC error response 00:33:29.216 response: 00:33:29.216 { 00:33:29.216 "code": -19, 00:33:29.216 "message": "No such device" 00:33:29.216 } 00:33:29.216 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:33:29.216 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:29.216 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:29.216 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:29.216 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:29.477 aio_bdev 00:33:29.477 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
dd736c40-ced7-49d5-a494-4cc4bed31cab 00:33:29.477 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=dd736c40-ced7-49d5-a494-4cc4bed31cab 00:33:29.477 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:29.477 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:33:29.477 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:29.477 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:29.477 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:29.738 08:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dd736c40-ced7-49d5-a494-4cc4bed31cab -t 2000 00:33:29.738 [ 00:33:29.738 { 00:33:29.738 "name": "dd736c40-ced7-49d5-a494-4cc4bed31cab", 00:33:29.738 "aliases": [ 00:33:29.738 "lvs/lvol" 00:33:29.738 ], 00:33:29.738 "product_name": "Logical Volume", 00:33:29.738 "block_size": 4096, 00:33:29.738 "num_blocks": 38912, 00:33:29.738 "uuid": "dd736c40-ced7-49d5-a494-4cc4bed31cab", 00:33:29.738 "assigned_rate_limits": { 00:33:29.738 "rw_ios_per_sec": 0, 00:33:29.738 "rw_mbytes_per_sec": 0, 00:33:29.738 "r_mbytes_per_sec": 0, 00:33:29.738 "w_mbytes_per_sec": 0 00:33:29.738 }, 00:33:29.738 "claimed": false, 00:33:29.738 "zoned": false, 00:33:29.738 "supported_io_types": { 00:33:29.738 "read": true, 00:33:29.738 "write": true, 00:33:29.738 "unmap": true, 00:33:29.738 "flush": false, 00:33:29.738 "reset": true, 00:33:29.738 "nvme_admin": false, 00:33:29.738 "nvme_io": false, 00:33:29.738 "nvme_io_md": false, 00:33:29.738 "write_zeroes": true, 00:33:29.738 "zcopy": false, 00:33:29.738 "get_zone_info": false, 00:33:29.738 "zone_management": false, 00:33:29.738 "zone_append": false, 00:33:29.738 "compare": false, 00:33:29.738 "compare_and_write": false, 00:33:29.738 "abort": false, 00:33:29.738 "seek_hole": true, 00:33:29.738 "seek_data": true, 00:33:29.738 "copy": false, 00:33:29.738 "nvme_iov_md": false 00:33:29.738 }, 00:33:29.738 "driver_specific": { 00:33:29.738 "lvol": { 00:33:29.738 "lvol_store_uuid": "58194faf-0b7e-4cb3-bb3c-01f691928f90", 00:33:29.738 "base_bdev": "aio_bdev", 00:33:29.738 "thin_provision": false, 00:33:29.738 "num_allocated_clusters": 38, 00:33:29.738 "snapshot": false, 00:33:29.738 "clone": false, 00:33:29.738 "esnap_clone": false 00:33:29.738 } 00:33:29.738 } 00:33:29.738 } 00:33:29.738 ] 00:33:29.738 08:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:33:29.738 08:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58194faf-0b7e-4cb3-bb3c-01f691928f90 00:33:29.738 08:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:29.999 08:32:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:29.999 08:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 58194faf-0b7e-4cb3-bb3c-01f691928f90 00:33:29.999 08:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:30.259 08:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:30.259 08:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dd736c40-ced7-49d5-a494-4cc4bed31cab 00:33:30.259 08:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 58194faf-0b7e-4cb3-bb3c-01f691928f90 00:33:30.519 08:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:30.780 08:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:30.780 00:33:30.780 real 0m16.037s 00:33:30.780 user 0m15.774s 00:33:30.780 sys 0m1.422s 00:33:30.780 08:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:30.780 08:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:30.780 ************************************ 00:33:30.780 END TEST lvs_grow_clean 00:33:30.780 ************************************ 00:33:30.780 08:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:33:30.780 08:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:30.780 08:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:30.780 08:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:30.780 ************************************ 00:33:30.780 START TEST lvs_grow_dirty 00:33:30.780 ************************************ 00:33:30.780 08:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:33:30.780 08:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:30.780 08:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:30.780 08:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:30.780 08:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:30.780 08:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:30.780 08:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:30.780 08:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:30.780 08:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:30.781 08:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:31.041 08:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:31.041 08:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:31.305 08:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3c46a839-8945-4503-8d13-354f1f03e63a 00:33:31.305 08:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c46a839-8945-4503-8d13-354f1f03e63a 00:33:31.305 08:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:31.305 08:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:31.305 08:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:31.305 08:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3c46a839-8945-4503-8d13-354f1f03e63a lvol 150 00:33:31.564 08:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b8eb1326-582f-4233-aa5a-5fc067cb16d9 00:33:31.564 08:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:31.564 08:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:31.825 [2024-11-28 08:32:28.900894] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:31.825 [2024-11-28 08:32:28.901040] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:31.825 true 00:33:31.825 08:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c46a839-8945-4503-8d13-354f1f03e63a 00:33:31.825 08:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:31.825 08:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:31.825 08:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:32.086 08:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b8eb1326-582f-4233-aa5a-5fc067cb16d9 00:33:32.346 08:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:32.346 [2024-11-28 08:32:29.577460] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:32.346 08:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:32.606 08:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2206136 00:33:32.606 08:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:32.606 08:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:32.606 08:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2206136 /var/tmp/bdevperf.sock 00:33:32.606 08:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2206136 ']' 00:33:32.606 08:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:32.606 08:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:32.606 08:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:32.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
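Setup for the dirty variant mirrors the sequence the clean test just finished, and the grow itself lands mid-run, once the bdevperf job below is actively writing, so the resize is exercised under live I/O. Condensed, with $rpc standing for the full rpc.py path and $aio for the backing file (both names assumed for brevity), the grow flow is:

    truncate -s 200M $aio                          # 200 MiB backing file
    $rpc bdev_aio_create $aio aio_bdev 4096        # AIO bdev, 4 KiB blocks (51200 of them)
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # 49 usable 4 MiB data clusters
    lvol=$($rpc bdev_lvol_create -u $lvs lvol 150) # 150 MiB volume exported as the namespace
    truncate -s 400M $aio                          # grow the file underneath the bdev
    $rpc bdev_aio_rescan aio_bdev                  # block count 51200 -> 102400
    $rpc bdev_lvol_grow_lvstore -u $lvs            # lvstore claims the new capacity
    $rpc bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'   # 49 -> 99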
00:33:32.606 08:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:32.606 08:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:32.606 [2024-11-28 08:32:29.799534] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:33:32.606 [2024-11-28 08:32:29.799594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2206136 ] 00:33:32.606 [2024-11-28 08:32:29.881392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.867 [2024-11-28 08:32:29.911175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.439 08:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:33.439 08:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:33.439 08:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:33.700 Nvme0n1 00:33:33.700 08:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:33.960 [ 00:33:33.960 { 00:33:33.960 "name": "Nvme0n1", 00:33:33.960 "aliases": [ 00:33:33.960 "b8eb1326-582f-4233-aa5a-5fc067cb16d9" 00:33:33.960 ], 00:33:33.960 "product_name": "NVMe disk", 00:33:33.960 "block_size": 4096, 00:33:33.960 "num_blocks": 38912, 00:33:33.960 "uuid": "b8eb1326-582f-4233-aa5a-5fc067cb16d9", 00:33:33.960 "numa_id": 0, 00:33:33.960 "assigned_rate_limits": { 00:33:33.960 "rw_ios_per_sec": 0, 00:33:33.960 "rw_mbytes_per_sec": 0, 00:33:33.960 "r_mbytes_per_sec": 0, 00:33:33.960 "w_mbytes_per_sec": 0 00:33:33.960 }, 00:33:33.960 "claimed": false, 00:33:33.960 "zoned": false, 00:33:33.960 "supported_io_types": { 00:33:33.960 "read": true, 00:33:33.960 "write": true, 00:33:33.960 "unmap": true, 00:33:33.960 "flush": true, 00:33:33.960 "reset": true, 00:33:33.960 "nvme_admin": true, 00:33:33.960 "nvme_io": true, 00:33:33.960 "nvme_io_md": false, 00:33:33.960 "write_zeroes": true, 00:33:33.960 "zcopy": false, 00:33:33.960 "get_zone_info": false, 00:33:33.960 "zone_management": false, 00:33:33.960 "zone_append": false, 00:33:33.960 "compare": true, 00:33:33.960 "compare_and_write": true, 00:33:33.960 "abort": true, 00:33:33.960 "seek_hole": false, 00:33:33.960 "seek_data": false, 00:33:33.960 "copy": true, 00:33:33.960 "nvme_iov_md": false 00:33:33.960 }, 00:33:33.960 "memory_domains": [ 00:33:33.960 { 00:33:33.960 "dma_device_id": "system", 00:33:33.960 "dma_device_type": 1 00:33:33.960 } 00:33:33.960 ], 00:33:33.960 "driver_specific": { 00:33:33.960 "nvme": [ 00:33:33.960 { 00:33:33.960 "trid": { 00:33:33.960 "trtype": "TCP", 00:33:33.960 "adrfam": "IPv4", 00:33:33.960 "traddr": "10.0.0.2", 00:33:33.960 "trsvcid": "4420", 00:33:33.960 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:33.960 }, 00:33:33.960 "ctrlr_data": 
{ 00:33:33.960 "cntlid": 1, 00:33:33.960 "vendor_id": "0x8086", 00:33:33.960 "model_number": "SPDK bdev Controller", 00:33:33.960 "serial_number": "SPDK0", 00:33:33.960 "firmware_revision": "25.01", 00:33:33.960 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:33.960 "oacs": { 00:33:33.960 "security": 0, 00:33:33.960 "format": 0, 00:33:33.960 "firmware": 0, 00:33:33.960 "ns_manage": 0 00:33:33.960 }, 00:33:33.960 "multi_ctrlr": true, 00:33:33.960 "ana_reporting": false 00:33:33.960 }, 00:33:33.960 "vs": { 00:33:33.960 "nvme_version": "1.3" 00:33:33.960 }, 00:33:33.960 "ns_data": { 00:33:33.960 "id": 1, 00:33:33.960 "can_share": true 00:33:33.960 } 00:33:33.960 } 00:33:33.960 ], 00:33:33.960 "mp_policy": "active_passive" 00:33:33.960 } 00:33:33.960 } 00:33:33.960 ] 00:33:33.960 08:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:33.960 08:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2206360 00:33:33.960 08:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:33.960 Running I/O for 10 seconds... 00:33:34.902 Latency(us) 00:33:34.902 [2024-11-28T07:32:32.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:34.902 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:34.902 Nvme0n1 : 1.00 17425.00 68.07 0.00 0.00 0.00 0.00 0.00 00:33:34.902 [2024-11-28T07:32:32.191Z] =================================================================================================================== 00:33:34.902 [2024-11-28T07:32:32.191Z] Total : 17425.00 68.07 0.00 0.00 0.00 0.00 0.00 00:33:34.902 00:33:35.845 08:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3c46a839-8945-4503-8d13-354f1f03e63a 00:33:35.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:35.845 Nvme0n1 : 2.00 17715.00 69.20 0.00 0.00 0.00 0.00 0.00 00:33:35.845 [2024-11-28T07:32:33.134Z] =================================================================================================================== 00:33:35.845 [2024-11-28T07:32:33.134Z] Total : 17715.00 69.20 0.00 0.00 0.00 0.00 0.00 00:33:35.845 00:33:36.105 true 00:33:36.105 08:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c46a839-8945-4503-8d13-354f1f03e63a 00:33:36.105 08:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:36.367 08:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:36.367 08:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:36.367 08:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2206360 00:33:36.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:36.938 Nvme0n1 : 
3.00 17810.00 69.57 0.00 0.00 0.00 0.00 0.00 00:33:36.938 [2024-11-28T07:32:34.227Z] =================================================================================================================== 00:33:36.938 [2024-11-28T07:32:34.227Z] Total : 17810.00 69.57 0.00 0.00 0.00 0.00 0.00 00:33:36.938 00:33:37.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:37.878 Nvme0n1 : 4.00 17881.75 69.85 0.00 0.00 0.00 0.00 0.00 00:33:37.878 [2024-11-28T07:32:35.167Z] =================================================================================================================== 00:33:37.878 [2024-11-28T07:32:35.167Z] Total : 17881.75 69.85 0.00 0.00 0.00 0.00 0.00 00:33:37.878 00:33:39.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:39.263 Nvme0n1 : 5.00 18855.40 73.65 0.00 0.00 0.00 0.00 0.00 00:33:39.263 [2024-11-28T07:32:36.552Z] =================================================================================================================== 00:33:39.263 [2024-11-28T07:32:36.552Z] Total : 18855.40 73.65 0.00 0.00 0.00 0.00 0.00 00:33:39.263 00:33:39.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:39.834 Nvme0n1 : 6.00 19956.83 77.96 0.00 0.00 0.00 0.00 0.00 00:33:39.834 [2024-11-28T07:32:37.123Z] =================================================================================================================== 00:33:39.834 [2024-11-28T07:32:37.123Z] Total : 19956.83 77.96 0.00 0.00 0.00 0.00 0.00 00:33:39.834 00:33:41.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:41.217 Nvme0n1 : 7.00 20743.57 81.03 0.00 0.00 0.00 0.00 0.00 00:33:41.217 [2024-11-28T07:32:38.506Z] =================================================================================================================== 00:33:41.217 [2024-11-28T07:32:38.506Z] Total : 20743.57 81.03 0.00 0.00 0.00 0.00 0.00 00:33:41.217 00:33:42.159 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:42.159 Nvme0n1 : 8.00 21341.62 83.37 0.00 0.00 0.00 0.00 0.00 00:33:42.159 [2024-11-28T07:32:39.448Z] =================================================================================================================== 00:33:42.159 [2024-11-28T07:32:39.448Z] Total : 21341.62 83.37 0.00 0.00 0.00 0.00 0.00 00:33:42.159 00:33:43.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:43.102 Nvme0n1 : 9.00 21792.56 85.13 0.00 0.00 0.00 0.00 0.00 00:33:43.102 [2024-11-28T07:32:40.391Z] =================================================================================================================== 00:33:43.102 [2024-11-28T07:32:40.391Z] Total : 21792.56 85.13 0.00 0.00 0.00 0.00 0.00 00:33:43.102 00:33:44.042 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:44.042 Nvme0n1 : 10.00 22159.90 86.56 0.00 0.00 0.00 0.00 0.00 00:33:44.042 [2024-11-28T07:32:41.331Z] =================================================================================================================== 00:33:44.042 [2024-11-28T07:32:41.331Z] Total : 22159.90 86.56 0.00 0.00 0.00 0.00 0.00 00:33:44.042 00:33:44.042 00:33:44.042 Latency(us) 00:33:44.042 [2024-11-28T07:32:41.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:44.042 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:44.042 Nvme0n1 : 10.00 22159.28 86.56 0.00 0.00 5773.45 2908.16 29054.29 00:33:44.042 
[2024-11-28T07:32:41.331Z] =================================================================================================================== 00:33:44.042 [2024-11-28T07:32:41.331Z] Total : 22159.28 86.56 0.00 0.00 5773.45 2908.16 29054.29 00:33:44.042 { 00:33:44.042 "results": [ 00:33:44.042 { 00:33:44.042 "job": "Nvme0n1", 00:33:44.042 "core_mask": "0x2", 00:33:44.042 "workload": "randwrite", 00:33:44.042 "status": "finished", 00:33:44.042 "queue_depth": 128, 00:33:44.042 "io_size": 4096, 00:33:44.042 "runtime": 10.003168, 00:33:44.042 "iops": 22159.27994011497, 00:33:44.042 "mibps": 86.5596872660741, 00:33:44.042 "io_failed": 0, 00:33:44.042 "io_timeout": 0, 00:33:44.042 "avg_latency_us": 5773.446889978632, 00:33:44.042 "min_latency_us": 2908.16, 00:33:44.042 "max_latency_us": 29054.293333333335 00:33:44.042 } 00:33:44.042 ], 00:33:44.042 "core_count": 1 00:33:44.042 } 00:33:44.042 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2206136 00:33:44.042 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2206136 ']' 00:33:44.042 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2206136 00:33:44.042 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:33:44.042 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:44.042 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2206136 00:33:44.042 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:44.042 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:44.042 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2206136' 00:33:44.042 killing process with pid 2206136 00:33:44.042 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2206136 00:33:44.043 Received shutdown signal, test time was about 10.000000 seconds 00:33:44.043 00:33:44.043 Latency(us) 00:33:44.043 [2024-11-28T07:32:41.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:44.043 [2024-11-28T07:32:41.332Z] =================================================================================================================== 00:33:44.043 [2024-11-28T07:32:41.332Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:44.043 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2206136 00:33:44.043 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:44.305 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
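The free_clusters=61 check coming up is plain arithmetic on numbers already visible in this log: the 150 MiB lvol occupies ceil(150/4) = 38 of the store's 4 MiB clusters (matching num_allocated_clusters in the bdev dump above), leaving 99 - 38 = 61 free after the grow. As a shell sketch:

    # ceil(150/4) via integer math, then free = total - allocated
    allocated=$(( (150 + 3) / 4 ))   # 38
    echo $(( 99 - allocated ))       # 61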
00:33:44.567 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c46a839-8945-4503-8d13-354f1f03e63a 00:33:44.567 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:44.829 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:44.829 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:33:44.829 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2202360 00:33:44.829 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2202360 00:33:44.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2202360 Killed "${NVMF_APP[@]}" "$@" 00:33:44.829 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:33:44.829 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:33:44.829 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:44.829 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:44.829 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:44.829 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2208487 00:33:44.829 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2208487 00:33:44.829 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:44.829 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2208487 ']' 00:33:44.829 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:44.829 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:44.829 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:44.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:44.829 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:44.829 08:32:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:44.829 [2024-11-28 08:32:42.038556] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:44.829 [2024-11-28 08:32:42.039561] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:33:44.829 [2024-11-28 08:32:42.039609] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:45.091 [2024-11-28 08:32:42.127620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:45.091 [2024-11-28 08:32:42.157940] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:45.091 [2024-11-28 08:32:42.157969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:45.091 [2024-11-28 08:32:42.157974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:45.091 [2024-11-28 08:32:42.157979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:45.091 [2024-11-28 08:32:42.157984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:45.091 [2024-11-28 08:32:42.158456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:45.091 [2024-11-28 08:32:42.209379] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:45.091 [2024-11-28 08:32:42.209573] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
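The `kill -9` above deliberately denies the old target a clean shutdown, so the lvstore metadata stays dirty; the freshly started target has no bdevs until the backing file is re-attached, and that attach is what drives the blobstore recovery notices that follow. A sketch of the round trip, with the binary path abbreviated and `$old_pid` illustrative:

```bash
# Dirty-shutdown round trip: SIGKILL leaves lvstore metadata unflushed...
kill -9 "$old_pid"; wait "$old_pid" 2>/dev/null || true
# ...a fresh target comes up with no bdevs...
nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
# ...and re-creating the AIO bdev over the same file replays the dirty metadata.
rpc.py bdev_aio_create "$testdir/aio_bdev" aio_bdev 4096
```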
00:33:45.660 08:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:45.660 08:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:45.661 08:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:45.661 08:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:45.661 08:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:45.661 08:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:45.661 08:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:45.921 [2024-11-28 08:32:43.028840] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:45.921 [2024-11-28 08:32:43.029082] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:45.921 [2024-11-28 08:32:43.029191] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:45.921 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:33:45.921 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b8eb1326-582f-4233-aa5a-5fc067cb16d9 00:33:45.921 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b8eb1326-582f-4233-aa5a-5fc067cb16d9 00:33:45.921 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:45.921 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:45.921 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:45.921 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:45.921 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:46.182 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b8eb1326-582f-4233-aa5a-5fc067cb16d9 -t 2000 00:33:46.182 [ 00:33:46.182 { 00:33:46.182 "name": "b8eb1326-582f-4233-aa5a-5fc067cb16d9", 00:33:46.182 "aliases": [ 00:33:46.182 "lvs/lvol" 00:33:46.182 ], 00:33:46.182 "product_name": "Logical Volume", 00:33:46.182 "block_size": 4096, 00:33:46.182 "num_blocks": 38912, 00:33:46.182 "uuid": "b8eb1326-582f-4233-aa5a-5fc067cb16d9", 00:33:46.182 "assigned_rate_limits": { 00:33:46.182 "rw_ios_per_sec": 0, 00:33:46.182 "rw_mbytes_per_sec": 0, 00:33:46.182 
"r_mbytes_per_sec": 0, 00:33:46.182 "w_mbytes_per_sec": 0 00:33:46.182 }, 00:33:46.182 "claimed": false, 00:33:46.182 "zoned": false, 00:33:46.182 "supported_io_types": { 00:33:46.182 "read": true, 00:33:46.182 "write": true, 00:33:46.182 "unmap": true, 00:33:46.182 "flush": false, 00:33:46.182 "reset": true, 00:33:46.182 "nvme_admin": false, 00:33:46.182 "nvme_io": false, 00:33:46.182 "nvme_io_md": false, 00:33:46.182 "write_zeroes": true, 00:33:46.182 "zcopy": false, 00:33:46.182 "get_zone_info": false, 00:33:46.182 "zone_management": false, 00:33:46.182 "zone_append": false, 00:33:46.182 "compare": false, 00:33:46.182 "compare_and_write": false, 00:33:46.182 "abort": false, 00:33:46.182 "seek_hole": true, 00:33:46.182 "seek_data": true, 00:33:46.182 "copy": false, 00:33:46.182 "nvme_iov_md": false 00:33:46.182 }, 00:33:46.182 "driver_specific": { 00:33:46.182 "lvol": { 00:33:46.182 "lvol_store_uuid": "3c46a839-8945-4503-8d13-354f1f03e63a", 00:33:46.182 "base_bdev": "aio_bdev", 00:33:46.182 "thin_provision": false, 00:33:46.182 "num_allocated_clusters": 38, 00:33:46.182 "snapshot": false, 00:33:46.182 "clone": false, 00:33:46.182 "esnap_clone": false 00:33:46.182 } 00:33:46.182 } 00:33:46.182 } 00:33:46.182 ] 00:33:46.182 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:46.182 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c46a839-8945-4503-8d13-354f1f03e63a 00:33:46.182 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:33:46.443 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:33:46.443 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c46a839-8945-4503-8d13-354f1f03e63a 00:33:46.443 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:33:46.703 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:33:46.703 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:46.703 [2024-11-28 08:32:43.918930] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:46.703 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c46a839-8945-4503-8d13-354f1f03e63a 00:33:46.703 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:33:46.703 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c46a839-8945-4503-8d13-354f1f03e63a 00:33:46.704 08:32:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:46.704 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:46.704 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:46.704 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:46.704 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:46.704 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:46.704 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:46.704 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:46.704 08:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c46a839-8945-4503-8d13-354f1f03e63a 00:33:46.964 request: 00:33:46.964 { 00:33:46.964 "uuid": "3c46a839-8945-4503-8d13-354f1f03e63a", 00:33:46.964 "method": "bdev_lvol_get_lvstores", 00:33:46.964 "req_id": 1 00:33:46.964 } 00:33:46.964 Got JSON-RPC error response 00:33:46.964 response: 00:33:46.964 { 00:33:46.964 "code": -19, 00:33:46.964 "message": "No such device" 00:33:46.964 } 00:33:46.964 08:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:33:46.964 08:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:46.964 08:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:46.964 08:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:46.964 08:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:47.225 aio_bdev 00:33:47.225 08:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b8eb1326-582f-4233-aa5a-5fc067cb16d9 00:33:47.225 08:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=b8eb1326-582f-4233-aa5a-5fc067cb16d9 00:33:47.225 08:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:47.225 08:32:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:47.225 08:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:47.225 08:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:47.225 08:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:47.225 08:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b8eb1326-582f-4233-aa5a-5fc067cb16d9 -t 2000 00:33:47.486 [ 00:33:47.486 { 00:33:47.486 "name": "b8eb1326-582f-4233-aa5a-5fc067cb16d9", 00:33:47.486 "aliases": [ 00:33:47.486 "lvs/lvol" 00:33:47.486 ], 00:33:47.486 "product_name": "Logical Volume", 00:33:47.486 "block_size": 4096, 00:33:47.486 "num_blocks": 38912, 00:33:47.486 "uuid": "b8eb1326-582f-4233-aa5a-5fc067cb16d9", 00:33:47.486 "assigned_rate_limits": { 00:33:47.486 "rw_ios_per_sec": 0, 00:33:47.486 "rw_mbytes_per_sec": 0, 00:33:47.486 "r_mbytes_per_sec": 0, 00:33:47.486 "w_mbytes_per_sec": 0 00:33:47.486 }, 00:33:47.486 "claimed": false, 00:33:47.486 "zoned": false, 00:33:47.486 "supported_io_types": { 00:33:47.486 "read": true, 00:33:47.486 "write": true, 00:33:47.486 "unmap": true, 00:33:47.486 "flush": false, 00:33:47.486 "reset": true, 00:33:47.486 "nvme_admin": false, 00:33:47.486 "nvme_io": false, 00:33:47.486 "nvme_io_md": false, 00:33:47.486 "write_zeroes": true, 00:33:47.486 "zcopy": false, 00:33:47.487 "get_zone_info": false, 00:33:47.487 "zone_management": false, 00:33:47.487 "zone_append": false, 00:33:47.487 "compare": false, 00:33:47.487 "compare_and_write": false, 00:33:47.487 "abort": false, 00:33:47.487 "seek_hole": true, 00:33:47.487 "seek_data": true, 00:33:47.487 "copy": false, 00:33:47.487 "nvme_iov_md": false 00:33:47.487 }, 00:33:47.487 "driver_specific": { 00:33:47.487 "lvol": { 00:33:47.487 "lvol_store_uuid": "3c46a839-8945-4503-8d13-354f1f03e63a", 00:33:47.487 "base_bdev": "aio_bdev", 00:33:47.487 "thin_provision": false, 00:33:47.487 "num_allocated_clusters": 38, 00:33:47.487 "snapshot": false, 00:33:47.487 "clone": false, 00:33:47.487 "esnap_clone": false 00:33:47.487 } 00:33:47.487 } 00:33:47.487 } 00:33:47.487 ] 00:33:47.487 08:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:47.487 08:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c46a839-8945-4503-8d13-354f1f03e63a 00:33:47.487 08:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:47.748 08:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:47.748 08:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3c46a839-8945-4503-8d13-354f1f03e63a 00:33:47.748 08:32:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:47.748 08:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:47.748 08:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b8eb1326-582f-4233-aa5a-5fc067cb16d9 00:33:48.007 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3c46a839-8945-4503-8d13-354f1f03e63a 00:33:48.267 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:48.267 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:48.267 00:33:48.267 real 0m17.526s 00:33:48.267 user 0m35.547s 00:33:48.267 sys 0m2.952s 00:33:48.267 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:48.267 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:48.267 ************************************ 00:33:48.267 END TEST lvs_grow_dirty 00:33:48.267 ************************************ 00:33:48.527 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:33:48.527 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:33:48.527 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:33:48.527 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:33:48.528 nvmf_trace.0 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
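Teardown at steps 92 through 95 runs strictly top-down so that no layer is deleted while something above it still holds a reference: lvol first, then the lvol store, then the base AIO bdev, and finally the backing file. The same order as a sketch, with the identifiers from this run:

```bash
# Teardown order matters: lvol -> lvstore -> base aio bdev -> backing file.
rpc.py bdev_lvol_delete b8eb1326-582f-4233-aa5a-5fc067cb16d9
rpc.py bdev_lvol_delete_lvstore -u 3c46a839-8945-4503-8d13-354f1f03e63a
rpc.py bdev_aio_delete aio_bdev
rm -f "$testdir/aio_bdev"   # $testdir stands in for the workspace path above
```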
00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:48.528 rmmod nvme_tcp 00:33:48.528 rmmod nvme_fabrics 00:33:48.528 rmmod nvme_keyring 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2208487 ']' 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2208487 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2208487 ']' 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2208487 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2208487 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2208487' 00:33:48.528 killing process with pid 2208487 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2208487 00:33:48.528 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2208487 00:33:48.788 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:48.788 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:48.788 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:48.788 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:33:48.788 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:33:48.788 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:48.788 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:33:48.788 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:48.788 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:48.788 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.788 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:48.789 08:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:50.700 08:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:50.960 00:33:50.960 real 0m45.080s 00:33:50.960 user 0m54.367s 00:33:50.960 sys 0m10.581s 00:33:50.960 08:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:50.960 08:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:50.960 ************************************ 00:33:50.960 END TEST nvmf_lvs_grow 00:33:50.960 ************************************ 00:33:50.960 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:50.960 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:50.960 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:50.960 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:50.960 ************************************ 00:33:50.960 START TEST nvmf_bdev_io_wait 00:33:50.960 ************************************ 00:33:50.960 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:50.960 * Looking for test storage... 
00:33:50.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:50.960 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:50.960 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:33:50.960 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:51.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.223 --rc genhtml_branch_coverage=1 00:33:51.223 --rc genhtml_function_coverage=1 00:33:51.223 --rc genhtml_legend=1 00:33:51.223 --rc geninfo_all_blocks=1 00:33:51.223 --rc geninfo_unexecuted_blocks=1 00:33:51.223 00:33:51.223 ' 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:51.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.223 --rc genhtml_branch_coverage=1 00:33:51.223 --rc genhtml_function_coverage=1 00:33:51.223 --rc genhtml_legend=1 00:33:51.223 --rc geninfo_all_blocks=1 00:33:51.223 --rc geninfo_unexecuted_blocks=1 00:33:51.223 00:33:51.223 ' 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:51.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.223 --rc genhtml_branch_coverage=1 00:33:51.223 --rc genhtml_function_coverage=1 00:33:51.223 --rc genhtml_legend=1 00:33:51.223 --rc geninfo_all_blocks=1 00:33:51.223 --rc geninfo_unexecuted_blocks=1 00:33:51.223 00:33:51.223 ' 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:51.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.223 --rc genhtml_branch_coverage=1 00:33:51.223 --rc genhtml_function_coverage=1 00:33:51.223 --rc genhtml_legend=1 00:33:51.223 --rc geninfo_all_blocks=1 00:33:51.223 --rc 
geninfo_unexecuted_blocks=1 00:33:51.223 00:33:51.223 ' 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:51.223 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:33:51.224 08:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
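The arrays being filled here are the harness's NIC allow-list, keyed by PCI vendor:device id; the two E810 entries (Intel 0x1592 and 0x159b) are what this rig matches a moment later. A quick way to check a host by hand, assuming `lspci` is available:

```bash
# List Intel E810 ports by PCI id, the same ids the e810 array above collects.
lspci -nn | grep -E '8086:(1592|159b)'
```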
00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:59.371 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:59.371 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:59.371 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:59.371 
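Once a PCI device matches, its kernel netdev name comes straight from sysfs: the kernel publishes each interface under the owning device's node, which is exactly the glob expanded above. A standalone sketch for one of this rig's ports:

```bash
# Map a NIC's PCI address to its kernel netdev name via sysfs.
pci=0000:4b:00.0
for netdev in /sys/bus/pci/devices/$pci/net/*; do
    echo "$pci -> $(basename "$netdev")"   # prints "0000:4b:00.0 -> cvl_0_0" here
done
```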
08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:59.371 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:59.372 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:59.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:59.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:33:59.372 00:33:59.372 --- 10.0.0.2 ping statistics --- 00:33:59.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:59.372 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:59.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:59.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:33:59.372 00:33:59.372 --- 10.0.0.1 ping statistics --- 00:33:59.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:59.372 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2213224 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2213224 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2213224 ']' 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:59.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
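The pings confirm the topology nvmf_tcp_init just built: the two ports of one physical NIC are looped back through a network namespace so the TCP path crosses real hardware on a single host, with the target side at 10.0.0.2 inside `cvl_0_0_ns_spdk` and the initiator side at 10.0.0.1 in the root namespace. The same wiring as a standalone sketch, using this rig's interface names:

```bash
# Split a two-port NIC across namespaces: target in its own netns, initiator in root.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2   # initiator -> target, the check performed above
```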
00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:59.372 08:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:59.372 [2024-11-28 08:32:55.920436] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:59.372 [2024-11-28 08:32:55.921609] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:33:59.372 [2024-11-28 08:32:55.921665] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:59.372 [2024-11-28 08:32:56.024827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:59.372 [2024-11-28 08:32:56.080326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:59.372 [2024-11-28 08:32:56.080384] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:59.372 [2024-11-28 08:32:56.080393] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:59.372 [2024-11-28 08:32:56.080400] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:59.372 [2024-11-28 08:32:56.080407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:59.372 [2024-11-28 08:32:56.082534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:59.372 [2024-11-28 08:32:56.082699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:59.372 [2024-11-28 08:32:56.082863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:59.372 [2024-11-28 08:32:56.082864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:59.372 [2024-11-28 08:32:56.083209] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
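Those startup notices follow directly from the launch flags: -m 0xF pins four reactors (cores 0-3), --interrupt-mode makes each reactor block on event file descriptors instead of busy-polling (hence the "to intr mode" messages), and --wait-for-rpc holds subsystem initialization until an explicit RPC so the test can tune bdev options first. Stripped of the harness wrappers, the launch amounts to the following sketch (binary path and flags as logged; PID handling simplified, this is not the full nvmfappstart logic):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    nvmfpid=$!     # 2213224 in this run; waitforlisten then polls /var/tmp/spdk.sock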
00:33:59.633 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:59.633 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:33:59.633 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:59.633 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:59.633 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:59.633 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:59.634 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:59.634 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.634 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:59.634 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.634 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:59.634 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.634 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:59.634 [2024-11-28 08:32:56.863901] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:59.634 [2024-11-28 08:32:56.864324] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:59.634 [2024-11-28 08:32:56.864523] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:59.634 [2024-11-28 08:32:56.864673] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
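Those two rpc_cmd calls are the point of this test. bdev_set_options -p 5 -c 1 appears to shrink the bdev_io pool to 5 entries with a per-thread cache of 1 (-p and -c look like rpc.py's bdev-io-pool-size and bdev-io-cache-size; the mapping is inferred from the flags, not shown in the log), so the four bdevperf workloads will exhaust the pool and exercise the queued-I/O wait path that gives nvmf_bdev_io_wait its name. framework_start_init then releases the initialization deferred by --wait-for-rpc, which is when the poll-group threads above switch to interrupt mode. The direct equivalents, assuming the default RPC socket:

    scripts/rpc.py bdev_set_options -p 5 -c 1    # tiny bdev_io pool: force I/O to queue and wait
    scripts/rpc.py framework_start_init          # finish the init held back by --wait-for-rpc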
00:33:59.634 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.634 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:59.634 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.634 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:59.634 [2024-11-28 08:32:56.875712] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:59.634 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.634 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:59.634 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.634 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:59.634 Malloc0 00:33:59.634 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.634 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:59.634 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.634 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:59.896 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.896 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:59.896 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:59.897 [2024-11-28 08:32:56.947950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2213572 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:59.897 08:32:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2213574 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:59.897 { 00:33:59.897 "params": { 00:33:59.897 "name": "Nvme$subsystem", 00:33:59.897 "trtype": "$TEST_TRANSPORT", 00:33:59.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:59.897 "adrfam": "ipv4", 00:33:59.897 "trsvcid": "$NVMF_PORT", 00:33:59.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:59.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:59.897 "hdgst": ${hdgst:-false}, 00:33:59.897 "ddgst": ${ddgst:-false} 00:33:59.897 }, 00:33:59.897 "method": "bdev_nvme_attach_controller" 00:33:59.897 } 00:33:59.897 EOF 00:33:59.897 )") 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2213576 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:59.897 { 00:33:59.897 "params": { 00:33:59.897 "name": "Nvme$subsystem", 00:33:59.897 "trtype": "$TEST_TRANSPORT", 00:33:59.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:59.897 "adrfam": "ipv4", 00:33:59.897 "trsvcid": "$NVMF_PORT", 00:33:59.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:59.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:59.897 "hdgst": ${hdgst:-false}, 00:33:59.897 "ddgst": ${ddgst:-false} 00:33:59.897 }, 00:33:59.897 "method": "bdev_nvme_attach_controller" 00:33:59.897 } 00:33:59.897 EOF 00:33:59.897 )") 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2213579 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:59.897 { 00:33:59.897 "params": { 00:33:59.897 "name": "Nvme$subsystem", 00:33:59.897 "trtype": "$TEST_TRANSPORT", 00:33:59.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:59.897 "adrfam": "ipv4", 00:33:59.897 "trsvcid": "$NVMF_PORT", 00:33:59.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:59.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:59.897 "hdgst": ${hdgst:-false}, 00:33:59.897 "ddgst": ${ddgst:-false} 00:33:59.897 }, 00:33:59.897 "method": "bdev_nvme_attach_controller" 00:33:59.897 } 00:33:59.897 EOF 00:33:59.897 )") 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:59.897 { 00:33:59.897 "params": { 00:33:59.897 "name": "Nvme$subsystem", 00:33:59.897 "trtype": "$TEST_TRANSPORT", 00:33:59.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:59.897 "adrfam": "ipv4", 00:33:59.897 "trsvcid": "$NVMF_PORT", 00:33:59.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:59.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:59.897 "hdgst": ${hdgst:-false}, 00:33:59.897 "ddgst": ${ddgst:-false} 00:33:59.897 }, 00:33:59.897 "method": "bdev_nvme_attach_controller" 00:33:59.897 } 00:33:59.897 EOF 00:33:59.897 )") 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2213572 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:59.897 "params": { 00:33:59.897 "name": "Nvme1", 00:33:59.897 "trtype": "tcp", 00:33:59.897 "traddr": "10.0.0.2", 00:33:59.897 "adrfam": "ipv4", 00:33:59.897 "trsvcid": "4420", 00:33:59.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:59.897 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:59.897 "hdgst": false, 00:33:59.897 "ddgst": false 00:33:59.897 }, 00:33:59.897 "method": "bdev_nvme_attach_controller" 00:33:59.897 }' 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:59.897 "params": { 00:33:59.897 "name": "Nvme1", 00:33:59.897 "trtype": "tcp", 00:33:59.897 "traddr": "10.0.0.2", 00:33:59.897 "adrfam": "ipv4", 00:33:59.897 "trsvcid": "4420", 00:33:59.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:59.897 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:59.897 "hdgst": false, 00:33:59.897 "ddgst": false 00:33:59.897 }, 00:33:59.897 "method": "bdev_nvme_attach_controller" 00:33:59.897 }' 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:59.897 "params": { 00:33:59.897 "name": "Nvme1", 00:33:59.897 "trtype": "tcp", 00:33:59.897 "traddr": "10.0.0.2", 00:33:59.897 "adrfam": "ipv4", 00:33:59.897 "trsvcid": "4420", 00:33:59.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:59.897 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:59.897 "hdgst": false, 00:33:59.897 "ddgst": false 00:33:59.897 }, 00:33:59.897 "method": "bdev_nvme_attach_controller" 00:33:59.897 }' 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:59.897 08:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:59.897 "params": { 00:33:59.897 "name": "Nvme1", 00:33:59.897 "trtype": "tcp", 00:33:59.897 "traddr": "10.0.0.2", 00:33:59.897 "adrfam": "ipv4", 00:33:59.897 "trsvcid": "4420", 00:33:59.897 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:59.897 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:59.898 "hdgst": false, 00:33:59.898 "ddgst": false 00:33:59.898 }, 00:33:59.898 "method": "bdev_nvme_attach_controller" 00:33:59.898 }' 00:33:59.898 [2024-11-28 08:32:57.005889] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:33:59.898 [2024-11-28 08:32:57.005963] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:33:59.898 [2024-11-28 08:32:57.007211] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
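Each bdevperf instance reads its controller config through bash process substitution: gen_nvmf_target_json expands the heredoc template above (the ${hdgst:-false} and ${ddgst:-false} default expansions are why the resolved JSON prints "hdgst": false when no digest variable is set), runs it through jq, and the shell hands the result to bdevperf as /dev/fd/63. Reduced to one instance, with the flags from the log:

    # <(...) is exactly what bdevperf sees as --json /dev/fd/63
    ./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256

Note that ${hdgst:-false} substitutes the literal false when hdgst is unset or empty, so digests stay off unless a caller exports hdgst=true before the template expands.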
00:33:59.898 [2024-11-28 08:32:57.007277] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:33:59.898 [2024-11-28 08:32:57.007667] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:33:59.898 [2024-11-28 08:32:57.007723] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:33:59.898 [2024-11-28 08:32:57.022230] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:33:59.898 [2024-11-28 08:32:57.022297] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:34:00.159 [2024-11-28 08:32:57.213851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:00.159 [2024-11-28 08:32:57.253823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:00.159 [2024-11-28 08:32:57.303966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:00.159 [2024-11-28 08:32:57.342604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:00.159 [2024-11-28 08:32:57.394255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:00.159 [2024-11-28 08:32:57.435962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:00.510 [2024-11-28 08:32:57.464268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:00.510 [2024-11-28 08:32:57.503865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:34:00.510 Running I/O for 1 seconds... 00:34:00.510 Running I/O for 1 seconds... 00:34:00.510 Running I/O for 1 seconds... 00:34:00.510 Running I/O for 1 seconds... 
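Before those four one-second runs start, the target is provisioned with the bdev_io_wait.sh sequence echoed above; the equivalent direct RPC calls would be:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport ('-o' and '-u 8192' as logged)
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB malloc bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The four bdevperf processes (core masks 0x10/0x20/0x40/0x80, workloads write/read/flush/unmap) then run concurrently against that namespace. The script records their PIDs (WRITE_PID=2213572 and so on) at launch, and the "wait 2213572" style calls below gate completion, which is why the per-workload latency tables arrive interleaved rather than in launch order.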
00:34:01.470 10011.00 IOPS, 39.11 MiB/s 00:34:01.470 Latency(us) 00:34:01.470 [2024-11-28T07:32:58.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:01.470 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:34:01.470 Nvme1n1 : 1.01 10067.73 39.33 0.00 0.00 12661.60 2102.61 15291.73 00:34:01.470 [2024-11-28T07:32:58.759Z] =================================================================================================================== 00:34:01.470 [2024-11-28T07:32:58.759Z] Total : 10067.73 39.33 0.00 0.00 12661.60 2102.61 15291.73 00:34:01.470 10195.00 IOPS, 39.82 MiB/s 00:34:01.470 Latency(us) 00:34:01.470 [2024-11-28T07:32:58.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:01.470 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:34:01.470 Nvme1n1 : 1.01 10270.64 40.12 0.00 0.00 12422.23 2334.72 16056.32 00:34:01.470 [2024-11-28T07:32:58.759Z] =================================================================================================================== 00:34:01.470 [2024-11-28T07:32:58.759Z] Total : 10270.64 40.12 0.00 0.00 12422.23 2334.72 16056.32 00:34:01.470 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2213574 00:34:01.470 9980.00 IOPS, 38.98 MiB/s 00:34:01.470 Latency(us) 00:34:01.470 [2024-11-28T07:32:58.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:01.470 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:34:01.470 Nvme1n1 : 1.01 10043.90 39.23 0.00 0.00 12701.50 4833.28 19988.48 00:34:01.470 [2024-11-28T07:32:58.759Z] =================================================================================================================== 00:34:01.470 [2024-11-28T07:32:58.759Z] Total : 10043.90 39.23 0.00 0.00 12701.50 4833.28 19988.48 00:34:01.732 181408.00 IOPS, 708.62 MiB/s 00:34:01.733 Latency(us) 00:34:01.733 [2024-11-28T07:32:59.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:01.733 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:34:01.733 Nvme1n1 : 1.00 181047.21 707.22 0.00 0.00 703.00 300.37 1966.08 00:34:01.733 [2024-11-28T07:32:59.022Z] =================================================================================================================== 00:34:01.733 [2024-11-28T07:32:59.022Z] Total : 181047.21 707.22 0.00 0.00 703.00 300.37 1966.08 00:34:01.733 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2213576 00:34:01.733 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2213579 00:34:01.733 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:01.733 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.733 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:01.733 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.733 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:34:01.733 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:34:01.733 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:01.733 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:34:01.733 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:01.733 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:34:01.733 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:01.733 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:01.733 rmmod nvme_tcp 00:34:01.733 rmmod nvme_fabrics 00:34:01.733 rmmod nvme_keyring 00:34:01.733 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:01.733 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:34:01.733 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:34:01.733 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2213224 ']' 00:34:01.733 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2213224 00:34:01.733 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2213224 ']' 00:34:01.733 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2213224 00:34:01.733 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:34:01.733 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:01.733 08:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2213224 00:34:01.994 08:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:01.994 08:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:01.994 08:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2213224' 00:34:01.994 killing process with pid 2213224 00:34:01.994 08:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2213224 00:34:01.994 08:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2213224 00:34:01.994 08:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:01.994 08:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:01.994 08:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:01.994 08:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:34:01.994 08:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:34:01.994 08:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:01.994 08:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:34:01.994 08:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:01.994 08:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:01.994 08:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:01.994 08:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:01.994 08:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:04.544 00:34:04.544 real 0m13.193s 00:34:04.544 user 0m15.872s 00:34:04.544 sys 0m7.919s 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:04.544 ************************************ 00:34:04.544 END TEST nvmf_bdev_io_wait 00:34:04.544 ************************************ 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:04.544 ************************************ 00:34:04.544 START TEST nvmf_queue_depth 00:34:04.544 ************************************ 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:04.544 * Looking for test storage... 
00:34:04.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:04.544 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:04.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.544 --rc genhtml_branch_coverage=1 00:34:04.544 --rc genhtml_function_coverage=1 00:34:04.545 --rc genhtml_legend=1 00:34:04.545 --rc geninfo_all_blocks=1 00:34:04.545 --rc geninfo_unexecuted_blocks=1 00:34:04.545 00:34:04.545 ' 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:04.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.545 --rc genhtml_branch_coverage=1 00:34:04.545 --rc genhtml_function_coverage=1 00:34:04.545 --rc genhtml_legend=1 00:34:04.545 --rc geninfo_all_blocks=1 00:34:04.545 --rc geninfo_unexecuted_blocks=1 00:34:04.545 00:34:04.545 ' 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:04.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.545 --rc genhtml_branch_coverage=1 00:34:04.545 --rc genhtml_function_coverage=1 00:34:04.545 --rc genhtml_legend=1 00:34:04.545 --rc geninfo_all_blocks=1 00:34:04.545 --rc geninfo_unexecuted_blocks=1 00:34:04.545 00:34:04.545 ' 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:04.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:04.545 --rc genhtml_branch_coverage=1 00:34:04.545 --rc genhtml_function_coverage=1 00:34:04.545 --rc genhtml_legend=1 00:34:04.545 --rc geninfo_all_blocks=1 00:34:04.545 --rc 
geninfo_unexecuted_blocks=1 00:34:04.545 00:34:04.545 ' 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:04.545 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:04.546 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:04.546 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.546 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:04.546 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:04.546 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:04.546 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:04.546 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:34:04.546 08:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:12.693 08:33:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:12.693 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:12.693 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:34:12.693 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:12.693 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:12.693 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:12.694 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:12.694 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:12.694 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:12.694 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:12.694 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:12.694 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:12.694 08:33:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:12.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:12.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:34:12.694 00:34:12.694 --- 10.0.0.2 ping statistics --- 00:34:12.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:12.694 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:12.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:12.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:34:12.694 00:34:12.694 --- 10.0.0.1 ping statistics --- 00:34:12.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:12.694 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2218252 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2218252 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2218252 ']' 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:12.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
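[editor's note] The trace above has just finished building the two-port NVMe/TCP test topology: nvmf_tcp_init moves the target-side port (cvl_0_0 on this host) into a private network namespace while the initiator-side port (cvl_0_1) stays in the root namespace, so traffic really crosses the physical link rather than loopback. A minimal sketch of those steps, using the interface and namespace names from this run (they are assigned per machine and will differ elsewhere):

  # netns setup as performed by nvmf_tcp_init in the trace above
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                            # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Note that common.sh additionally tags the iptables rule with an SPDK_NVMF comment (the ipts/@790 line above) so it can be removed selectively at teardown.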
00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:12.694 [2024-11-28 08:33:09.161970] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:12.694 [2024-11-28 08:33:09.163102] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:34:12.694 [2024-11-28 08:33:09.163154] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:12.694 [2024-11-28 08:33:09.240605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:12.694 [2024-11-28 08:33:09.287371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:12.694 [2024-11-28 08:33:09.287416] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:12.694 [2024-11-28 08:33:09.287423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:12.694 [2024-11-28 08:33:09.287428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:12.694 [2024-11-28 08:33:09.287433] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:12.694 [2024-11-28 08:33:09.288081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:12.694 [2024-11-28 08:33:09.360885] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:12.694 [2024-11-28 08:33:09.361108] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
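[editor's note] With networking up, nvmfappstart launches the target inside the namespace; the -e 0xFFFF tracepoint mask and --interrupt-mode flag come from this interrupt-mode test group, and the NOTICE lines above confirm the reactor and both spdk_threads switched to interrupt mode before the test continues. An equivalent manual launch, as a sketch; the rpc_get_methods poll is only a stand-in for the waitforlisten helper traced here, and nvmfpid is a stand-in for the traced variable of the same name:

  # start the target in the namespace with interrupt mode and core mask 0x2
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!
  # wait until the RPC socket answers, roughly what waitforlisten does
  until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done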
00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:12.694 [2024-11-28 08:33:09.452880] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:12.694 Malloc0 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
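[editor's note] The rpc_cmd calls traced here (the listener registration completes just below) amount to five plain RPCs against the target, after which bdevperf is started in passive mode (-z) on its own RPC socket so a controller can be attached before any I/O runs. Condensed into rpc.py calls, with every flag copied from the trace (rpc.py is the usual front end for the rpc_cmd wrapper used by these scripts):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MiB bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # -z: bdevperf waits for perform_tests instead of starting I/O immediately
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # then attach the remote subsystem over TCP and kick off the 10 s verify run
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The queue depth of 1024 against 4 KiB I/O is the point of this test; the per-second IOPS progression and the JSON summary it produces appear in the trace that follows.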
00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:12.694 [2024-11-28 08:33:09.541096] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2218574 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2218574 /var/tmp/bdevperf.sock 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2218574 ']' 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:12.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:12.694 08:33:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:12.695 [2024-11-28 08:33:09.607256] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:34:12.695 [2024-11-28 08:33:09.607320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2218574 ] 00:34:12.695 [2024-11-28 08:33:09.699511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:12.695 [2024-11-28 08:33:09.752113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:13.267 08:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:13.267 08:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:34:13.267 08:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:13.267 08:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.267 08:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:13.267 NVMe0n1 00:34:13.267 08:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.267 08:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:13.528 Running I/O for 10 seconds... 00:34:15.420 8747.00 IOPS, 34.17 MiB/s [2024-11-28T07:33:13.653Z] 8941.50 IOPS, 34.93 MiB/s [2024-11-28T07:33:15.039Z] 9510.33 IOPS, 37.15 MiB/s [2024-11-28T07:33:15.978Z] 10239.00 IOPS, 40.00 MiB/s [2024-11-28T07:33:16.921Z] 10858.40 IOPS, 42.42 MiB/s [2024-11-28T07:33:17.863Z] 11320.83 IOPS, 44.22 MiB/s [2024-11-28T07:33:18.804Z] 11631.57 IOPS, 45.44 MiB/s [2024-11-28T07:33:19.746Z] 11883.62 IOPS, 46.42 MiB/s [2024-11-28T07:33:20.690Z] 12061.33 IOPS, 47.11 MiB/s [2024-11-28T07:33:20.690Z] 12207.90 IOPS, 47.69 MiB/s 00:34:23.401 Latency(us) 00:34:23.401 [2024-11-28T07:33:20.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:23.401 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:34:23.401 Verification LBA range: start 0x0 length 0x4000 00:34:23.401 NVMe0n1 : 10.04 12245.90 47.84 0.00 0.00 83332.52 7591.25 72526.51 00:34:23.401 [2024-11-28T07:33:20.690Z] =================================================================================================================== 00:34:23.401 [2024-11-28T07:33:20.690Z] Total : 12245.90 47.84 0.00 0.00 83332.52 7591.25 72526.51 00:34:23.401 { 00:34:23.401 "results": [ 00:34:23.401 { 00:34:23.401 "job": "NVMe0n1", 00:34:23.401 "core_mask": "0x1", 00:34:23.401 "workload": "verify", 00:34:23.401 "status": "finished", 00:34:23.401 "verify_range": { 00:34:23.401 "start": 0, 00:34:23.401 "length": 16384 00:34:23.401 }, 00:34:23.401 "queue_depth": 1024, 00:34:23.401 "io_size": 4096, 00:34:23.401 "runtime": 10.042789, 00:34:23.401 "iops": 12245.901014150551, 00:34:23.401 "mibps": 47.83555083652559, 00:34:23.401 "io_failed": 0, 00:34:23.401 "io_timeout": 0, 00:34:23.401 "avg_latency_us": 83332.51511813285, 00:34:23.401 "min_latency_us": 7591.253333333333, 00:34:23.401 "max_latency_us": 72526.50666666667 00:34:23.401 } 
00:34:23.401 ], 00:34:23.401 "core_count": 1 00:34:23.401 } 00:34:23.664 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2218574 00:34:23.664 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2218574 ']' 00:34:23.664 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2218574 00:34:23.664 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:34:23.664 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:23.664 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2218574 00:34:23.664 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:23.664 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:23.664 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2218574' 00:34:23.664 killing process with pid 2218574 00:34:23.664 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2218574 00:34:23.664 Received shutdown signal, test time was about 10.000000 seconds 00:34:23.664 00:34:23.664 Latency(us) 00:34:23.664 [2024-11-28T07:33:20.953Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:23.664 [2024-11-28T07:33:20.953Z] =================================================================================================================== 00:34:23.664 [2024-11-28T07:33:20.953Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:23.664 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2218574 00:34:23.664 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:34:23.664 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:34:23.664 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:23.664 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:34:23.664 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:23.664 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:34:23.664 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:23.664 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:23.664 rmmod nvme_tcp 00:34:23.664 rmmod nvme_fabrics 00:34:23.664 rmmod nvme_keyring 00:34:23.664 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:23.924 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:34:23.924 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
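[editor's note] From here nvmftestfini unwinds everything: bdevperf and then the target are killed via killprocess, the kernel NVMe modules are unloaded (the rmmod lines above are modprobe -v output), the SPDK-tagged iptables rule is filtered out of a save/restore round trip, and the namespace and addresses are removed. remove_spdk_ns is traced with xtrace disabled, so its body does not appear in this log; deleting the namespace is an assumption about what it amounts to. A condensed sketch, reusing the names from above:

  kill $bdevperf_pid $nvmfpid                            # killprocess escalates if needed
  modprobe -v -r nvme-tcp                                # also pulls out nvme_fabrics/nvme_keyring
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only the tagged rule
  ip netns delete cvl_0_0_ns_spdk                        # assumed body of remove_spdk_ns
  ip -4 addr flush cvl_0_1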
00:34:23.924 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2218252 ']' 00:34:23.924 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2218252 00:34:23.924 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2218252 ']' 00:34:23.924 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2218252 00:34:23.924 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:34:23.924 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:23.924 08:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2218252 00:34:23.924 08:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:23.924 08:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:23.924 08:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2218252' 00:34:23.924 killing process with pid 2218252 00:34:23.924 08:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2218252 00:34:23.924 08:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2218252 00:34:23.924 08:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:23.924 08:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:23.924 08:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:23.924 08:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:34:23.924 08:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:34:23.924 08:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:23.924 08:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:34:23.924 08:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:23.924 08:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:23.924 08:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:23.924 08:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:23.924 08:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:26.474 00:34:26.474 real 0m21.862s 00:34:26.474 user 0m24.566s 00:34:26.474 sys 0m7.293s 00:34:26.474 08:33:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:26.474 ************************************ 00:34:26.474 END TEST nvmf_queue_depth 00:34:26.474 ************************************ 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:26.474 ************************************ 00:34:26.474 START TEST nvmf_target_multipath 00:34:26.474 ************************************ 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:26.474 * Looking for test storage... 00:34:26.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:26.474 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:26.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.475 --rc genhtml_branch_coverage=1 00:34:26.475 --rc genhtml_function_coverage=1 00:34:26.475 --rc genhtml_legend=1 00:34:26.475 --rc geninfo_all_blocks=1 00:34:26.475 --rc geninfo_unexecuted_blocks=1 00:34:26.475 00:34:26.475 ' 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:26.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.475 --rc genhtml_branch_coverage=1 00:34:26.475 --rc genhtml_function_coverage=1 00:34:26.475 --rc genhtml_legend=1 00:34:26.475 --rc geninfo_all_blocks=1 00:34:26.475 --rc geninfo_unexecuted_blocks=1 00:34:26.475 00:34:26.475 ' 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:26.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.475 --rc genhtml_branch_coverage=1 00:34:26.475 --rc genhtml_function_coverage=1 00:34:26.475 --rc genhtml_legend=1 
00:34:26.475 --rc geninfo_all_blocks=1 00:34:26.475 --rc geninfo_unexecuted_blocks=1 00:34:26.475 00:34:26.475 ' 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:26.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.475 --rc genhtml_branch_coverage=1 00:34:26.475 --rc genhtml_function_coverage=1 00:34:26.475 --rc genhtml_legend=1 00:34:26.475 --rc geninfo_all_blocks=1 00:34:26.475 --rc geninfo_unexecuted_blocks=1 00:34:26.475 00:34:26.475 ' 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:34:26.475 08:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:34.626 08:33:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:34.626 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:34.627 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:34.627 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:34.627 08:33:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:34.627 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:34.627 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:34.627 08:33:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:34.627 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:34.627 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:34.627 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:34.627 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:34.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:34.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:34:34.627 00:34:34.627 --- 10.0.0.2 ping statistics --- 00:34:34.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:34.628 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:34.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:34.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:34:34.628 00:34:34.628 --- 10.0.0.1 ping statistics --- 00:34:34.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:34.628 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:34:34.628 only one NIC for nvmf test 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:34.628 rmmod nvme_tcp 00:34:34.628 rmmod nvme_fabrics 00:34:34.628 rmmod nvme_keyring 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:34.628 08:33:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:34.628 08:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:36.018 08:33:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:36.018 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:36.280 00:34:36.280 real 0m10.011s 00:34:36.280 user 0m2.202s 00:34:36.280 sys 0m5.778s 00:34:36.280 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:36.280 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:34:36.280 ************************************ 00:34:36.280 END TEST nvmf_target_multipath 00:34:36.280 ************************************ 00:34:36.280 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:36.280 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:36.280 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:36.280 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:36.280 ************************************ 00:34:36.280 START TEST nvmf_zcopy 00:34:36.280 ************************************ 00:34:36.280 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:36.280 * Looking for test storage... 
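The "END TEST nvmf_target_multipath" banner above and the "START TEST nvmf_zcopy" banner that follows it are emitted by the harness's run_test wrapper, which also produces the real/user/sys summary. Its actual definition lives in autotest_common.sh and is not shown in this trace; the following is a minimal sketch of the pattern those banners and the time output imply (run_test_sketch is a hypothetical name):

    # Minimal sketch, assuming bash; not SPDK's actual run_test implementation.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"          # produces the real/user/sys lines seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    # Invocation mirroring the trace (script path and flags verbatim):
    run_test_sketch nvmf_zcopy \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh \
        --transport=tcp --interrupt-mode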
00:34:36.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:36.280 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:36.280 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:34:36.280 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:36.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:36.542 --rc genhtml_branch_coverage=1 00:34:36.542 --rc genhtml_function_coverage=1 00:34:36.542 --rc genhtml_legend=1 00:34:36.542 --rc geninfo_all_blocks=1 00:34:36.542 --rc geninfo_unexecuted_blocks=1 00:34:36.542 00:34:36.542 ' 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:36.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:36.542 --rc genhtml_branch_coverage=1 00:34:36.542 --rc genhtml_function_coverage=1 00:34:36.542 --rc genhtml_legend=1 00:34:36.542 --rc geninfo_all_blocks=1 00:34:36.542 --rc geninfo_unexecuted_blocks=1 00:34:36.542 00:34:36.542 ' 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:36.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:36.542 --rc genhtml_branch_coverage=1 00:34:36.542 --rc genhtml_function_coverage=1 00:34:36.542 --rc genhtml_legend=1 00:34:36.542 --rc geninfo_all_blocks=1 00:34:36.542 --rc geninfo_unexecuted_blocks=1 00:34:36.542 00:34:36.542 ' 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:36.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:36.542 --rc genhtml_branch_coverage=1 00:34:36.542 --rc genhtml_function_coverage=1 00:34:36.542 --rc genhtml_legend=1 00:34:36.542 --rc geninfo_all_blocks=1 00:34:36.542 --rc geninfo_unexecuted_blocks=1 00:34:36.542 00:34:36.542 ' 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.542 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:34:36.543 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.543 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:34:36.543 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:36.543 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:36.543 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:36.543 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:36.543 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:36.543 08:33:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:36.543 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:36.543 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:36.543 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:36.543 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:36.543 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:34:36.543 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:36.543 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:36.543 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:36.543 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:36.543 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:36.543 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:36.543 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:36.543 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:36.543 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:36.543 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:36.543 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:34:36.543 08:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:44.693 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:44.693 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:34:44.693 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:44.693 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:44.693 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:44.693 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:44.693 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:44.693 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:34:44.693 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:44.693 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:34:44.694 08:33:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:44.694 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:44.694 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:44.694 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:44.694 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:44.694 08:33:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:44.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:44.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:34:44.694 00:34:44.694 --- 10.0.0.2 ping statistics --- 00:34:44.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:44.694 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:44.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:44.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:34:44.694 00:34:44.694 --- 10.0.0.1 ping statistics --- 00:34:44.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:44.694 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:44.694 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:44.695 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:44.695 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:44.695 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:44.695 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:44.695 08:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2229191 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2229191 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2229191 ']' 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:44.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:44.695 [2024-11-28 08:33:41.066515] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:44.695 [2024-11-28 08:33:41.067482] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:34:44.695 [2024-11-28 08:33:41.067517] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:44.695 [2024-11-28 08:33:41.160824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:44.695 [2024-11-28 08:33:41.195726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:44.695 [2024-11-28 08:33:41.195760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:44.695 [2024-11-28 08:33:41.195768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:44.695 [2024-11-28 08:33:41.195774] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:44.695 [2024-11-28 08:33:41.195780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:44.695 [2024-11-28 08:33:41.196331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:44.695 [2024-11-28 08:33:41.252085] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:44.695 [2024-11-28 08:33:41.252359] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
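At this point the target application has been launched inside the cvl_0_0_ns_spdk network namespace (nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2) and waitforlisten 2229191 blocks until it is ready. The real helper in autotest_common.sh also verifies RPC responsiveness; a minimal sketch of the waiting pattern, assuming the /var/tmp/spdk.sock default seen in the trace (waitforlisten_sketch is a hypothetical stand-in):

    # Hedged sketch of the waitforlisten step, not SPDK's actual helper.
    waitforlisten_sketch() {
        local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock}
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target exited prematurely
            [[ -S $rpc_sock ]] && return 0           # RPC socket is up
            sleep 0.1
        done
        return 1                                     # timed out after ~10s
    }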
00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:44.695 [2024-11-28 08:33:41.933096] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:44.695 [2024-11-28 08:33:41.961340] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:34:44.695 08:33:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.695 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:44.956 malloc0 00:34:44.956 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.956 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:34:44.956 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.956 08:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:44.956 08:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.956 08:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:34:44.957 08:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:34:44.957 08:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:44.957 08:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:44.957 08:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:44.957 08:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:44.957 { 00:34:44.957 "params": { 00:34:44.957 "name": "Nvme$subsystem", 00:34:44.957 "trtype": "$TEST_TRANSPORT", 00:34:44.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:44.957 "adrfam": "ipv4", 00:34:44.957 "trsvcid": "$NVMF_PORT", 00:34:44.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:44.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:44.957 "hdgst": ${hdgst:-false}, 00:34:44.957 "ddgst": ${ddgst:-false} 00:34:44.957 }, 00:34:44.957 "method": "bdev_nvme_attach_controller" 00:34:44.957 } 00:34:44.957 EOF 00:34:44.957 )") 00:34:44.957 08:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:44.957 08:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:34:44.957 08:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:44.957 08:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:44.957 "params": { 00:34:44.957 "name": "Nvme1", 00:34:44.957 "trtype": "tcp", 00:34:44.957 "traddr": "10.0.0.2", 00:34:44.957 "adrfam": "ipv4", 00:34:44.957 "trsvcid": "4420", 00:34:44.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:44.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:44.957 "hdgst": false, 00:34:44.957 "ddgst": false 00:34:44.957 }, 00:34:44.957 "method": "bdev_nvme_attach_controller" 00:34:44.957 }' 00:34:44.957 [2024-11-28 08:33:42.061830] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
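The rpc_cmd calls traced above provision the zero-copy target end to end: a TCP transport with zcopy enabled, subsystem cnode1, data and discovery listeners on 10.0.0.2:4420, and a malloc bdev exported as namespace 1; bdevperf is then fed the generated JSON over /dev/fd/62 (its EAL parameter line continues below). For reference, the same sequence collected into one plain script — every flag is verbatim from the trace, while the rpc.py path is an assumption:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # TCP transport; flags verbatim from the trace (-c 0 in-capsule data
    # size, --zcopy to enable the zero-copy receive path under test)
    $rpc_py nvmf_create_transport -t tcp -o -c 0 --zcopy

    # Subsystem allowing any host (-a), capped at 10 namespaces (-m 10)
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10

    # Data and discovery listeners on the in-namespace target address
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # 32 MiB malloc bdev with 4096-byte blocks, exported as NSID 1
    $rpc_py bdev_malloc_create 32 4096 -b malloc0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1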
00:34:44.957 [2024-11-28 08:33:42.061886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2229370 ] 00:34:44.957 [2024-11-28 08:33:42.153019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:44.957 [2024-11-28 08:33:42.205825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:45.217 Running I/O for 10 seconds... 00:34:47.543 6587.00 IOPS, 51.46 MiB/s [2024-11-28T07:33:45.773Z] 6575.00 IOPS, 51.37 MiB/s [2024-11-28T07:33:46.715Z] 6598.33 IOPS, 51.55 MiB/s [2024-11-28T07:33:47.657Z] 6593.00 IOPS, 51.51 MiB/s [2024-11-28T07:33:48.599Z] 7094.00 IOPS, 55.42 MiB/s [2024-11-28T07:33:49.542Z] 7519.83 IOPS, 58.75 MiB/s [2024-11-28T07:33:50.927Z] 7819.00 IOPS, 61.09 MiB/s [2024-11-28T07:33:51.871Z] 8045.88 IOPS, 62.86 MiB/s [2024-11-28T07:33:52.813Z] 8222.67 IOPS, 64.24 MiB/s [2024-11-28T07:33:52.813Z] 8366.30 IOPS, 65.36 MiB/s 00:34:55.524 Latency(us) 00:34:55.524 [2024-11-28T07:33:52.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:55.524 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:34:55.524 Verification LBA range: start 0x0 length 0x1000 00:34:55.524 Nvme1n1 : 10.01 8369.96 65.39 0.00 0.00 15247.32 2484.91 26214.40 00:34:55.524 [2024-11-28T07:33:52.813Z] =================================================================================================================== 00:34:55.524 [2024-11-28T07:33:52.813Z] Total : 8369.96 65.39 0.00 0.00 15247.32 2484.91 26214.40 00:34:55.524 08:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2231310 00:34:55.524 08:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:34:55.524 08:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:55.524 08:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:34:55.524 08:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:34:55.524 08:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:55.524 08:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:55.524 08:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:55.524 08:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:55.524 { 00:34:55.524 "params": { 00:34:55.524 "name": "Nvme$subsystem", 00:34:55.524 "trtype": "$TEST_TRANSPORT", 00:34:55.524 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:55.524 "adrfam": "ipv4", 00:34:55.524 "trsvcid": "$NVMF_PORT", 00:34:55.524 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:55.524 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:55.524 "hdgst": ${hdgst:-false}, 00:34:55.524 "ddgst": ${ddgst:-false} 00:34:55.524 }, 00:34:55.524 "method": "bdev_nvme_attach_controller" 00:34:55.524 } 00:34:55.524 EOF 00:34:55.524 )") 00:34:55.524 [2024-11-28 08:33:52.632663] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:34:55.524 [2024-11-28 08:33:52.632693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.524 08:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:55.524 08:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:34:55.524 08:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:55.524 08:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:55.524 "params": { 00:34:55.524 "name": "Nvme1", 00:34:55.524 "trtype": "tcp", 00:34:55.524 "traddr": "10.0.0.2", 00:34:55.524 "adrfam": "ipv4", 00:34:55.524 "trsvcid": "4420", 00:34:55.524 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:55.524 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:55.524 "hdgst": false, 00:34:55.524 "ddgst": false 00:34:55.524 }, 00:34:55.524 "method": "bdev_nvme_attach_controller" 00:34:55.524 }' 00:34:55.524 [2024-11-28 08:33:52.644626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.524 [2024-11-28 08:33:52.644636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.524 [2024-11-28 08:33:52.656622] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.524 [2024-11-28 08:33:52.656630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.524 [2024-11-28 08:33:52.668622] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.524 [2024-11-28 08:33:52.668630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.524 [2024-11-28 08:33:52.677949] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
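The second bdevperf run (spdk_pid2231310; its EAL parameter line continues below) drives a 5-second 50/50 random read/write load while the test repeatedly re-issues nvmf_subsystem_add_ns for NSID 1. Each attempt is rejected — "Requested NSID 1 already in use" / "Unable to add namespace" — and that is the point: the add path pauses and resumes the subsystem, so the test exercises pause/resume under live zero-copy I/O. A sketch of the shape of that loop (the literal body lives in target/zcopy.sh and is not shown verbatim in this trace):

    # bdevperf flags verbatim from the trace; the loop below is illustrative.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
    perfpid=$!

    while kill -0 "$perfpid" 2>/dev/null; do
        # Expected to fail: NSID 1 already exists, so the target pauses the
        # subsystem, rejects the add, and resumes - all under live I/O.
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done
    wait "$perfpid"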
00:34:55.524 [2024-11-28 08:33:52.677997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2231310 ] 00:34:55.524 [2024-11-28 08:33:52.680621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.524 [2024-11-28 08:33:52.680629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.524 [2024-11-28 08:33:52.692621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.524 [2024-11-28 08:33:52.692628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.524 [2024-11-28 08:33:52.704621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.524 [2024-11-28 08:33:52.704628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.524 [2024-11-28 08:33:52.716621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.524 [2024-11-28 08:33:52.716628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.524 [2024-11-28 08:33:52.728620] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.524 [2024-11-28 08:33:52.728628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.524 [2024-11-28 08:33:52.740621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.524 [2024-11-28 08:33:52.740628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.524 [2024-11-28 08:33:52.752621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.524 [2024-11-28 08:33:52.752628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.524 [2024-11-28 08:33:52.761494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:55.524 [2024-11-28 08:33:52.764623] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.524 [2024-11-28 08:33:52.764631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.524 [2024-11-28 08:33:52.776621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.524 [2024-11-28 08:33:52.776630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.524 [2024-11-28 08:33:52.788622] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.524 [2024-11-28 08:33:52.788632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.524 [2024-11-28 08:33:52.790872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:55.524 [2024-11-28 08:33:52.800623] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.524 [2024-11-28 08:33:52.800632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.785 [2024-11-28 08:33:52.812628] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.785 [2024-11-28 08:33:52.812642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.785 [2024-11-28 08:33:52.824625] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:34:55.785 [2024-11-28 08:33:52.824635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.785 [2024-11-28 08:33:52.836623] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.785 [2024-11-28 08:33:52.836633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.785 [2024-11-28 08:33:52.848621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.785 [2024-11-28 08:33:52.848630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.785 [2024-11-28 08:33:52.860634] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.785 [2024-11-28 08:33:52.860651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.785 [2024-11-28 08:33:52.872626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.785 [2024-11-28 08:33:52.872637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.785 [2024-11-28 08:33:52.884624] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.785 [2024-11-28 08:33:52.884635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.785 [2024-11-28 08:33:52.896623] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.785 [2024-11-28 08:33:52.896634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.785 [2024-11-28 08:33:52.908621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.785 [2024-11-28 08:33:52.908628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.785 [2024-11-28 08:33:52.920620] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.785 [2024-11-28 08:33:52.920628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.785 [2024-11-28 08:33:52.932621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.785 [2024-11-28 08:33:52.932628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.785 [2024-11-28 08:33:52.944622] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.785 [2024-11-28 08:33:52.944631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.785 [2024-11-28 08:33:52.956622] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.785 [2024-11-28 08:33:52.956630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.785 [2024-11-28 08:33:52.968621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.785 [2024-11-28 08:33:52.968628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.785 [2024-11-28 08:33:52.980621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.785 [2024-11-28 08:33:52.980628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.785 [2024-11-28 08:33:52.992622] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:55.785 [2024-11-28 08:33:52.992631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:55.785 [2024-11-28 
00:34:55.785 Running I/O for 5 seconds...
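The EAL parameters line shows this is bdevperf (SPDK's bdev benchmark app) starting on a single core. A sketch of an invocation consistent with what the log shows, not the test script's literal command line; the path, queue depth, workload, and config name are assumptions, while the core mask, duration, and I/O size are taken or inferred from the log:

    # Path varies by SPDK version; build/examples/bdevperf is common.
    ./build/examples/bdevperf \
        -m 0x1 \              # single core, matching "-c 0x1" in the EAL line
        -t 5 \                # "Running I/O for 5 seconds..."
        -o 8192 \             # 8 KiB I/O size, inferred from the stats lines below
        -q 128 \              # queue depth: hypothetical
        -w randwrite \        # workload: hypothetical, not visible in this log
        --json bdevperf.json  # bdev config attaching the target (assumed name)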
00:34:56.831 18912.00 IOPS, 147.75 MiB/s [2024-11-28T07:33:54.120Z]
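The periodic stats line lets you back out the I/O size: 147.75 MiB/s divided by 18912 IOPS is exactly 8192 bytes, so the run is consistent with 8 KiB I/Os. Quick check:

    # IOPS x 8 KiB, expressed in MiB/s, should reproduce the reported figure.
    python3 -c 'print(18912.00 * 8192 / 2**20)'   # -> 147.75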
00:34:57.874 18989.00 IOPS, 148.35 MiB/s [2024-11-28T07:33:55.163Z]
00:34:58.924 19017.67 IOPS, 148.58 MiB/s [2024-11-28T07:33:56.213Z]
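Across the three one-second samples the rate stays within about 0.6% (18912.00, 18989.00, 19017.67 IOPS), i.e. the continuously failing add-namespace RPCs on the target do not visibly disturb the data path. Average over the window:

    python3 -c 'print(sum([18912.00, 18989.00, 19017.67]) / 3)'   # -> ~18972.89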
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:59.525 [2024-11-28 08:33:56.779949] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:59.525 [2024-11-28 08:33:56.779964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:59.525 [2024-11-28 08:33:56.793236] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:59.525 [2024-11-28 08:33:56.793250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:59.816 [2024-11-28 08:33:56.808070] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:59.816 [2024-11-28 08:33:56.808085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:59.816 [2024-11-28 08:33:56.821022] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:59.816 [2024-11-28 08:33:56.821036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:59.816 [2024-11-28 08:33:56.835699] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:59.816 [2024-11-28 08:33:56.835717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:59.816 [2024-11-28 08:33:56.848810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:59.816 [2024-11-28 08:33:56.848824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:59.816 [2024-11-28 08:33:56.861820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:59.816 [2024-11-28 08:33:56.861834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:59.816 [2024-11-28 08:33:56.875801] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:59.816 [2024-11-28 08:33:56.875816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:59.816 [2024-11-28 08:33:56.888935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:59.816 [2024-11-28 08:33:56.888948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:59.816 [2024-11-28 08:33:56.903549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:59.816 [2024-11-28 08:33:56.903564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:59.816 [2024-11-28 08:33:56.916315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:59.816 [2024-11-28 08:33:56.916330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:59.816 [2024-11-28 08:33:56.929089] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:59.816 [2024-11-28 08:33:56.929102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:59.816 [2024-11-28 08:33:56.944048] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:59.816 [2024-11-28 08:33:56.944063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:59.816 [2024-11-28 08:33:56.957257] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:59.816 [2024-11-28 08:33:56.957271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:59.816 [2024-11-28 08:33:56.971697] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:59.816 [2024-11-28 08:33:56.971712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:59.816 [2024-11-28 08:33:56.984419] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:59.816 [2024-11-28 08:33:56.984435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:59.816 [2024-11-28 08:33:56.997215] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:59.816 [2024-11-28 08:33:56.997231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:59.816 [2024-11-28 08:33:57.011659] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:59.816 [2024-11-28 08:33:57.011674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:59.816 [2024-11-28 08:33:57.024824] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:59.816 [2024-11-28 08:33:57.024839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:59.816 [2024-11-28 08:33:57.037336] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:59.816 [2024-11-28 08:33:57.037350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:59.816 [2024-11-28 08:33:57.052195] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:59.816 [2024-11-28 08:33:57.052209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:59.816 19037.25 IOPS, 148.73 MiB/s [2024-11-28T07:33:57.105Z] [2024-11-28 08:33:57.065396] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:59.816 [2024-11-28 08:33:57.065410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:59.816 [2024-11-28 08:33:57.080110] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:59.816 [2024-11-28 08:33:57.080125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.081 [2024-11-28 08:33:57.093177] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.081 [2024-11-28 08:33:57.093195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.081 [2024-11-28 08:33:57.107809] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.081 [2024-11-28 08:33:57.107824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.081 [2024-11-28 08:33:57.120840] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.081 [2024-11-28 08:33:57.120855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.081 [2024-11-28 08:33:57.133745] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.081 [2024-11-28 08:33:57.133759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.081 [2024-11-28 08:33:57.147884] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.081 [2024-11-28 08:33:57.147899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.081 [2024-11-28 08:33:57.161127] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:35:00.081 [2024-11-28 08:33:57.161141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.081 [2024-11-28 08:33:57.175675] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.081 [2024-11-28 08:33:57.175690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.081 [2024-11-28 08:33:57.188563] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.081 [2024-11-28 08:33:57.188579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.081 [2024-11-28 08:33:57.201423] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.081 [2024-11-28 08:33:57.201436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.081 [2024-11-28 08:33:57.215780] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.081 [2024-11-28 08:33:57.215795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.081 [2024-11-28 08:33:57.229137] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.081 [2024-11-28 08:33:57.229151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.081 [2024-11-28 08:33:57.243689] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.081 [2024-11-28 08:33:57.243703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.081 [2024-11-28 08:33:57.256850] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.081 [2024-11-28 08:33:57.256864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.081 [2024-11-28 08:33:57.269503] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.081 [2024-11-28 08:33:57.269517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.081 [2024-11-28 08:33:57.283847] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.081 [2024-11-28 08:33:57.283862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.081 [2024-11-28 08:33:57.297070] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.081 [2024-11-28 08:33:57.297084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.081 [2024-11-28 08:33:57.312001] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.081 [2024-11-28 08:33:57.312015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.081 [2024-11-28 08:33:57.325198] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.081 [2024-11-28 08:33:57.325212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.081 [2024-11-28 08:33:57.340305] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.081 [2024-11-28 08:33:57.340320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.081 [2024-11-28 08:33:57.353265] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.081 [2024-11-28 08:33:57.353289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.341 [2024-11-28 08:33:57.367784] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.341 [2024-11-28 08:33:57.367799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.341 [2024-11-28 08:33:57.380478] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.341 [2024-11-28 08:33:57.380493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.341 [2024-11-28 08:33:57.394033] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.341 [2024-11-28 08:33:57.394047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.341 [2024-11-28 08:33:57.407861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.341 [2024-11-28 08:33:57.407876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.341 [2024-11-28 08:33:57.420826] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.341 [2024-11-28 08:33:57.420840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.341 [2024-11-28 08:33:57.433605] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.341 [2024-11-28 08:33:57.433619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.341 [2024-11-28 08:33:57.447623] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.341 [2024-11-28 08:33:57.447638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.341 [2024-11-28 08:33:57.460668] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.341 [2024-11-28 08:33:57.460682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.342 [2024-11-28 08:33:57.473223] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.342 [2024-11-28 08:33:57.473237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.342 [2024-11-28 08:33:57.487830] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.342 [2024-11-28 08:33:57.487844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.342 [2024-11-28 08:33:57.500977] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.342 [2024-11-28 08:33:57.500991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.342 [2024-11-28 08:33:57.515698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.342 [2024-11-28 08:33:57.515713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.342 [2024-11-28 08:33:57.528501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.342 [2024-11-28 08:33:57.528515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.342 [2024-11-28 08:33:57.541500] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.342 [2024-11-28 08:33:57.541514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.342 [2024-11-28 08:33:57.555443] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.342 [2024-11-28 08:33:57.555457] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.342 [2024-11-28 08:33:57.568518] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.342 [2024-11-28 08:33:57.568533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.342 [2024-11-28 08:33:57.581381] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.342 [2024-11-28 08:33:57.581395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.342 [2024-11-28 08:33:57.595947] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.342 [2024-11-28 08:33:57.595962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.342 [2024-11-28 08:33:57.609327] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.342 [2024-11-28 08:33:57.609341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.342 [2024-11-28 08:33:57.623879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.342 [2024-11-28 08:33:57.623893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.602 [2024-11-28 08:33:57.637247] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.602 [2024-11-28 08:33:57.637261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.602 [2024-11-28 08:33:57.651888] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.602 [2024-11-28 08:33:57.651902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.602 [2024-11-28 08:33:57.665013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.602 [2024-11-28 08:33:57.665027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.602 [2024-11-28 08:33:57.679693] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.602 [2024-11-28 08:33:57.679707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.602 [2024-11-28 08:33:57.692485] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.602 [2024-11-28 08:33:57.692500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.602 [2024-11-28 08:33:57.705323] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.602 [2024-11-28 08:33:57.705337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.603 [2024-11-28 08:33:57.719909] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.603 [2024-11-28 08:33:57.719924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.603 [2024-11-28 08:33:57.732703] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.603 [2024-11-28 08:33:57.732718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.603 [2024-11-28 08:33:57.746142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.603 [2024-11-28 08:33:57.746157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.603 [2024-11-28 08:33:57.760378] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.603 [2024-11-28 08:33:57.760393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.603 [2024-11-28 08:33:57.773237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.603 [2024-11-28 08:33:57.773252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.603 [2024-11-28 08:33:57.787864] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.603 [2024-11-28 08:33:57.787879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.603 [2024-11-28 08:33:57.800942] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.603 [2024-11-28 08:33:57.800956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.603 [2024-11-28 08:33:57.815792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.603 [2024-11-28 08:33:57.815807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.603 [2024-11-28 08:33:57.828797] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.603 [2024-11-28 08:33:57.828811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.603 [2024-11-28 08:33:57.841725] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.603 [2024-11-28 08:33:57.841740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.603 [2024-11-28 08:33:57.855868] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.603 [2024-11-28 08:33:57.855884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.603 [2024-11-28 08:33:57.869308] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.603 [2024-11-28 08:33:57.869322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.603 [2024-11-28 08:33:57.883479] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.603 [2024-11-28 08:33:57.883494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.870 [2024-11-28 08:33:57.896501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.870 [2024-11-28 08:33:57.896517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.870 [2024-11-28 08:33:57.909174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.870 [2024-11-28 08:33:57.909189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.870 [2024-11-28 08:33:57.923815] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.870 [2024-11-28 08:33:57.923831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.870 [2024-11-28 08:33:57.937066] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.870 [2024-11-28 08:33:57.937080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:00.870 [2024-11-28 08:33:57.951743] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:00.870 [2024-11-28 08:33:57.951757] 
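The error flood above is expected behavior for this phase of the zcopy test, not a malfunction: while bdevperf keeps I/O running against the subsystem, a helper (pid 2231310, reaped at zcopy.sh lines 42/49 further down) keeps retrying an add-namespace RPC for NSID 1, which is already attached, and the target rejects every attempt. A minimal sketch of a loop that reproduces the pattern, assuming a target already serving NSID 1 on nqn.2016-06.io.spdk:cnode1; the direct rpc.py invocation stands in for the rpc_cmd wrapper the test uses, and the bdev name malloc0 is taken from the trace below:

  # Keep requesting NSID 1, which is already in use; every call should fail
  # on the target with "Requested NSID 1 already in use".
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for _ in $(seq 1 150); do
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
      sleep 0.01   # roughly the ~13 ms spacing seen between the errors above
  done

Each rejection shows up as the two-line pair above: subsystem.c:2126 refuses the duplicate NSID, then the RPC layer (nvmf_rpc.c:1520) reports the failed add.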
[... the error pair repeats a further eight times (08:33:57.965 through 08:33:58.060) ...]
00:35:00.870 19034.60 IOPS, 148.71 MiB/s
00:35:00.870 Latency(us)
00:35:00.870 [2024-11-28T07:33:58.159Z] Device Information : runtime(s)     IOPS     MiB/s   Fail/s   TO/s   Average      min       max
00:35:00.870 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:35:00.870 Nvme1n1            :       5.01   19037.77   148.73     0.00   0.00   6717.64   2607.79  12288.00
00:35:00.870 [2024-11-28T07:33:58.159Z] ===================================================================================================================
00:35:00.870 [2024-11-28T07:33:58.159Z] Total              :            19037.77   148.73     0.00   0.00   6717.64   2607.79  12288.00
[... after the 5.01 s job finishes, the error pair recurs nine more times at ~12 ms intervals (08:33:58.068 through 08:33:58.164) while the retry loop drains ...]
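Two quick consistency checks on the result block above. The MiB/s column is just IOPS times the fixed 8192-byte I/O size, and with queue depth 128 Little's law predicts the average latency from the IOPS alone. Both can be re-derived from the printed numbers, for example:

  # MiB/s = IOPS * io_size / 2^20: 19037.77 * 8192 / 1048576 = 148.73 MiB/s
  awk 'BEGIN { printf "%.2f MiB/s\n", 19037.77 * 8192 / 1048576 }'
  # Little's law: latency = qd / IOPS = 128 / 19037.77 s = ~6724 us,
  # in line with the reported 6717.64 us average.
  awk 'BEGIN { printf "%.0f us\n", 128 / 19037.77 * 1e6 }'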
00:35:01.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2231310) - No such process 00:35:01.132 08:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2231310 00:35:01.132 08:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:01.132 08:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.132 08:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:01.132 08:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.132 08:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:01.132 08:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.132 08:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:01.132 delay0 00:35:01.132 08:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.132 08:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:35:01.132 08:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.132 08:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:01.132 08:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.132 08:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
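The step traced above replaces the namespace's backing bdev with a delay bdev whose average and p99 read/write latencies are all 1,000,000 microseconds (one second), then runs SPDK's bundled abort example against it for five seconds at queue depth 64. With every I/O held for about a second, almost all of them are still outstanding when their aborts arrive, which is why the summary that follows shows only 218 I/Os completed against 41577 failed. A condensed, standalone replay of the same sequence; calling scripts/rpc.py directly is an assumption here (the test goes through its rpc_cmd wrapper), but the RPC names and arguments are verbatim from the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # delay bdev latencies are given in microseconds: 1000000 us = 1 s
  "$rpc" bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
      -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'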
00:35:01.132 [2024-11-28 08:33:58.369322] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:35:09.269 Initializing NVMe Controllers 00:35:09.269 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:09.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:35:09.269 Initialization complete. Launching workers. 00:35:09.269 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 218, failed: 41577 00:35:09.269 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 41642, failed to submit 153 00:35:09.269 success 41579, unsuccessful 63, failed 0 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:09.269 rmmod nvme_tcp 00:35:09.269 rmmod nvme_fabrics 00:35:09.269 rmmod nvme_keyring 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2229191 ']' 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2229191 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2229191 ']' 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2229191 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2229191 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2229191' 00:35:09.269 killing process with pid 2229191 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2229191 00:35:09.269
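The killprocess trace above shows the guard sequence the harness runs before signalling anything: check that the pid is non-empty and still alive, resolve its command name (reactor_1, i.e. the SPDK target, in this run), make sure it is not a sudo wrapper, and only then kill and reap it. A minimal re-creation of that guard; the sudo branch is not exercised in this log, so the sketch simply refuses it:

  killprocess() {
      local pid=$1 process_name=
      [ -z "$pid" ] && return 1
      kill -0 "$pid" 2>/dev/null || return 1            # pid must still be alive
      [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
      [ "$process_name" = sudo ] && return 1            # don't blindly signal a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                   # reaps it only if it is our child
  }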
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2229191 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:09.269 08:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:10.651 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:10.651 00:35:10.651 real 0m34.346s 00:35:10.651 user 0m44.138s 00:35:10.651 sys 0m12.314s 00:35:10.651 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:10.651 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:10.651 ************************************ 00:35:10.651 END TEST nvmf_zcopy 00:35:10.651 ************************************ 00:35:10.651 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:10.651 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:10.651 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:10.651 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:10.651 ************************************ 00:35:10.651 START TEST nvmf_nmic 00:35:10.651 ************************************ 00:35:10.651 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:35:10.651 * Looking for test storage... 
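Two details of the teardown just traced are the reusable part: firewall state is restored by round-tripping the whole ruleset through iptables-save and filtering out every rule tagged SPDK_NVMF, rather than deleting rules one by one, and the test-side interface is cleared with a single IPv4 address flush. Both commands appear verbatim in the trace; this snippet only strings them together:

  # Drop every rule the test tagged SPDK_NVMF in one pass, then clear the
  # IPv4 addresses left on the test-side interface.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1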
00:35:10.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:10.651 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:10.651 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:35:10.651 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:10.912 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:10.912 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:10.912 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:10.912 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:10.912 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:35:10.912 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:35:10.912 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:35:10.912 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:35:10.912 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:35:10.912 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:35:10.912 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:35:10.912 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:10.912 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:35:10.912 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:35:10.912 08:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:10.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.912 --rc genhtml_branch_coverage=1 00:35:10.912 --rc genhtml_function_coverage=1 00:35:10.912 --rc genhtml_legend=1 00:35:10.912 --rc geninfo_all_blocks=1 00:35:10.912 --rc geninfo_unexecuted_blocks=1 00:35:10.912 00:35:10.912 ' 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:10.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.912 --rc genhtml_branch_coverage=1 00:35:10.912 --rc genhtml_function_coverage=1 00:35:10.912 --rc genhtml_legend=1 00:35:10.912 --rc geninfo_all_blocks=1 00:35:10.912 --rc geninfo_unexecuted_blocks=1 00:35:10.912 00:35:10.912 ' 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:10.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.912 --rc genhtml_branch_coverage=1 00:35:10.912 --rc genhtml_function_coverage=1 00:35:10.912 --rc genhtml_legend=1 00:35:10.912 --rc geninfo_all_blocks=1 00:35:10.912 --rc geninfo_unexecuted_blocks=1 00:35:10.912 00:35:10.912 ' 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:10.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:10.912 --rc genhtml_branch_coverage=1 00:35:10.912 --rc genhtml_function_coverage=1 00:35:10.912 --rc genhtml_legend=1 00:35:10.912 --rc geninfo_all_blocks=1 00:35:10.912 --rc geninfo_unexecuted_blocks=1 00:35:10.912 00:35:10.912 ' 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:10.912 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go trio repeated several more times, one copy per nested sourcing of paths/export.sh ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... idem, now led by the go directory ...]:/var/lib/snapd/snap/bin 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... idem, now led by the protoc directory ...]:/var/lib/snapd/snap/bin 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... idem ...]:/var/lib/snapd/snap/bin 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:10.913
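Looking back at the lcov gate traced just after the test-storage banner: cmp_versions splits both version strings on dots and dashes and compares the fields numerically, so "1.15 < 2" holds as soon as the first fields differ (1 < 2). A self-contained sketch of that comparison, assuming the same IFS-based splitting the trace shows; the helper name version_lt is ours, not the script's:

  # Return 0 when dotted version $1 sorts strictly before $2.
  version_lt() {
      local -a ver1 ver2
      local v hi
      IFS='.-' read -ra ver1 <<< "$1"
      IFS='.-' read -ra ver2 <<< "$2"
      hi=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < hi; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo 'lcov 1.15 predates 2'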
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:35:10.913 08:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:19.050 08:34:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:19.050 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:19.051 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:19.051 08:34:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:19.051 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:19.051 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:19.051 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:19.052 
08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:19.052 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:19.052 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:19.052 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:19.052 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:19.052 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:19.052 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:19.052 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:35:19.052 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:19.052 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:19.052 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:19.052 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:19.052 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:19.052 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:19.052 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:19.052 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:19.052 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:19.053 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:19.053 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:19.053 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:19.053 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:19.053 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:19.053 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:19.053 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:19.053 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:19.053 08:34:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:19.053 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:19.053 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
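
[Editor's note] The ip commands traced just above and continuing below build the test topology: one port of the E810 pair is moved into a private network namespace to serve as the NVMe-oF target, while its peer stays in the root namespace as the initiator, so traffic genuinely crosses the link instead of short-circuiting inside one stack. A minimal stand-alone sketch, using this run's interface names (cvl_0_0/cvl_0_1) and assuming the two ports are cabled back-to-back:

  # Target side lives in its own namespace; initiator side stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP in
  ping -c 1 10.0.0.2                                 # sanity check, as the harness does
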
00:35:19.053 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:35:19.053 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:35:19.053 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:35:19.053 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:35:19.054 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:35:19.054 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:35:19.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:19.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms
00:35:19.054
00:35:19.054 --- 10.0.0.2 ping statistics ---
00:35:19.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:19.054 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms
00:35:19.054 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:35:19.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:19.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms
00:35:19.054
00:35:19.054 --- 10.0.0.1 ping statistics ---
00:35:19.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:19.054 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms
00:35:19.054 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:19.054 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0
00:35:19.054 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:35:19.054 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:19.054 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:35:19.054 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:35:19.054 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:19.054 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:35:19.055 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:35:19.055 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:35:19.055 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:19.055 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:19.055 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:35:19.055 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2237923
00:35:19.055 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic --
nvmf/common.sh@510 -- # waitforlisten 2237923 00:35:19.055 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:19.055 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2237923 ']' 00:35:19.055 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:19.055 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:19.055 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:19.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:19.055 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:19.055 08:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:19.055 [2024-11-28 08:34:15.357495] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:19.055 [2024-11-28 08:34:15.358799] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:35:19.056 [2024-11-28 08:34:15.358848] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:19.056 [2024-11-28 08:34:15.453422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:19.056 [2024-11-28 08:34:15.491081] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:19.056 [2024-11-28 08:34:15.491113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:19.056 [2024-11-28 08:34:15.491121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:19.056 [2024-11-28 08:34:15.491128] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:19.056 [2024-11-28 08:34:15.491134] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:19.056 [2024-11-28 08:34:15.492875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:19.056 [2024-11-28 08:34:15.493024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:19.056 [2024-11-28 08:34:15.493191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:19.056 [2024-11-28 08:34:15.493214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:19.056 [2024-11-28 08:34:15.549769] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:19.056 [2024-11-28 08:34:15.550770] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:19.056 [2024-11-28 08:34:15.550933] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
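
[Editor's note] The trace above shows the target app being launched inside the namespace with a 4-core mask and --interrupt-mode, which is why four reactors start and every spdk_thread reports interrupt mode. A hedged sketch of the launch-and-wait step follows; the polling loop is an assumption standing in for the harness's waitforlisten helper, which watches the same /var/tmp/spdk.sock seen as rpc_addr in the trace:

  # Start nvmf_tgt in the target namespace (paths relative to an SPDK checkout).
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # The UNIX-domain RPC socket is a filesystem path, so it is reachable from
  # outside the namespace; poll it until the app answers.
  until ./scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done
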
00:35:19.056 [2024-11-28 08:34:15.551885] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:19.056 [2024-11-28 08:34:15.551911] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:19.056 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:19.056 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:35:19.056 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:19.057 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:19.057 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:19.057 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:19.057 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:19.057 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.057 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:19.057 [2024-11-28 08:34:16.178076] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:19.057 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.057 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:19.057 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.057 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:19.057 Malloc0 00:35:19.057 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.057 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:19.057 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.057 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:19.057 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.057 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:19.057 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.057 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:19.057 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.057 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
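
[Editor's note] The rpc_cmd calls traced above perform the whole target-side provisioning for this test. Written as direct scripts/rpc.py invocations (rpc_cmd is the harness's wrapper over the same JSON-RPC socket), the sequence is roughly:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # -o and -u 8192 are this run's transport opts
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDKISFASTANDAWESOME                               # allow any host, set the serial
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420

Test case1 just below then creates a second subsystem (cnode2) and tries to add the same Malloc0 to it, expecting that nvmf_subsystem_add_ns to fail because the bdev is already claimed.
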
00:35:19.058 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:19.058 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:35:19.058 [2024-11-28 08:34:16.266147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:19.058 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:19.058 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:35:19.058 test case1: single bdev can't be used in multiple subsystems
00:35:19.058 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:35:19.058 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:19.058 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:35:19.058 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:19.058 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:35:19.058 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:19.058 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:35:19.058 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:19.058 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:35:19.059 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:35:19.059 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:19.059 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:35:19.059 [2024-11-28 08:34:16.301672] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:35:19.059 [2024-11-28 08:34:16.301692] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:35:19.059 [2024-11-28 08:34:16.301700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:35:19.059 request:
00:35:19.059 {
00:35:19.059 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:35:19.059 "namespace": {
00:35:19.059 "bdev_name": "Malloc0",
00:35:19.059 "no_auto_visible": false,
00:35:19.059 "hide_metadata": false
00:35:19.059 },
00:35:19.059 "method": "nvmf_subsystem_add_ns",
00:35:19.059 "req_id": 1
00:35:19.059 }
00:35:19.059 Got JSON-RPC error response
00:35:19.059 response:
00:35:19.059 {
00:35:19.059 "code": -32602,
00:35:19.059 "message": "Invalid parameters"
00:35:19.059 }
00:35:19.059 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:35:19.059 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:35:19.059 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:35:19.059 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:35:19.059 Adding namespace failed - expected result.
00:35:19.059 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:35:19.060 test case2: host connect to nvmf target in multiple paths
00:35:19.060 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:35:19.060 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:19.060 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:35:19.060 [2024-11-28 08:34:16.313771] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:35:19.060 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:19.060 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:35:19.643 08:34:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:35:20.214 08:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:35:20.214 08:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0
00:35:20.214 08:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:35:20.214 08:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:35:20.214 08:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2
00:35:22.126 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:35:22.126 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:35:22.126 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:35:22.126 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:35:22.126 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:35:22.126 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0
00:35:22.126 08:34:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:35:22.126 [global]
00:35:22.126 thread=1
00:35:22.126 invalidate=1
00:35:22.126 rw=write
00:35:22.126 time_based=1
00:35:22.126 runtime=1
00:35:22.126 ioengine=libaio
00:35:22.126 direct=1
00:35:22.126 bs=4096
00:35:22.126 iodepth=1
00:35:22.126 norandommap=0
00:35:22.126 numjobs=1
00:35:22.126
00:35:22.126 verify_dump=1
00:35:22.126 verify_backlog=512
00:35:22.126 verify_state_save=0
00:35:22.126 do_verify=1
00:35:22.126 verify=crc32c-intel
00:35:22.126 [job0]
00:35:22.126 filename=/dev/nvme0n1
00:35:22.387 Could not set queue depth (nvme0n1)
00:35:22.387 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:35:22.387 fio-3.35
00:35:22.387 Starting 1 thread
00:35:23.775
00:35:23.775 job0: (groupid=0, jobs=1): err= 0: pid=2238947: Thu Nov 28 08:34:20 2024
00:35:23.775 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec)
00:35:23.775 slat (nsec): min=7604, max=55946, avg=26335.43, stdev=1899.60
00:35:23.775 clat (usec): min=611, max=1212, avg=955.45, stdev=57.77
00:35:23.775 lat (usec): min=638, max=1239, avg=981.78, stdev=57.68
00:35:23.775 clat percentiles (usec):
00:35:23.775 | 1.00th=[ 791], 5.00th=[ 848], 10.00th=[ 889], 20.00th=[ 922],
00:35:23.775 | 30.00th=[ 938], 40.00th=[ 947], 50.00th=[ 963], 60.00th=[ 971],
00:35:23.775 | 70.00th=[ 988], 80.00th=[ 996], 90.00th=[ 1020], 95.00th=[ 1037],
00:35:23.775 | 99.00th=[ 1090], 99.50th=[ 1123], 99.90th=[ 1221], 99.95th=[ 1221],
00:35:23.775 | 99.99th=[ 1221]
00:35:23.775 write: IOPS=834, BW=3337KiB/s (3417kB/s)(3340KiB/1001msec); 0 zone resets
00:35:23.775 slat (nsec): min=8955, max=68949, avg=30251.83, stdev=9769.50
00:35:23.775 clat (usec): min=266, max=1079, avg=553.44, stdev=96.92
00:35:23.775 lat (usec): min=277, max=1114, avg=583.69, stdev=100.42
00:35:23.775 clat percentiles (usec):
00:35:23.775 | 1.00th=[ 330], 5.00th=[ 371], 10.00th=[ 437], 20.00th=[ 465],
00:35:23.775 | 30.00th=[ 519], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 570],
00:35:23.775 | 70.00th=[ 611], 80.00th=[ 635], 90.00th=[ 668], 95.00th=[ 709],
00:35:23.775 | 99.00th=[ 758], 99.50th=[ 791], 99.90th=[ 1074], 99.95th=[ 1074],
00:35:23.775 | 99.99th=[ 1074]
00:35:23.775 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:35:23.775 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:35:23.775 lat (usec) : 500=16.85%, 750=44.69%, 1000=31.92%
00:35:23.775 lat (msec) : 2=6.53%
00:35:23.775 cpu : usr=4.30%, sys=3.60%, ctx=1347, majf=0, minf=1
00:35:23.775 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:23.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:23.775 issued rwts: total=512,835,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:23.775 latency : target=0, window=0, percentile=100.00%, depth=1
00:35:23.775
00:35:23.775 Run status group 0 (all jobs):
00:35:23.775 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec
00:35:23.775 WRITE: bw=3337KiB/s (3417kB/s), 3337KiB/s-3337KiB/s (3417kB/s-3417kB/s), io=3340KiB (3420kB), run=1001-1001msec
00:35:23.775
00:35:23.775 Disk stats (read/write):
00:35:23.775 nvme0n1: ios=562/664, merge=0/0, ticks=523/292, in_queue=815, util=93.39%
00:35:23.775 08:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:35:23.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:35:23.775 08:34:21
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:23.775 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:35:23.775 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:23.775 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:23.775 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:23.775 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:23.775 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:35:23.775 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:23.775 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:35:23.775 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:23.775 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:35:23.775 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:23.775 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:35:23.775 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:23.775 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:23.775 rmmod nvme_tcp 00:35:23.775 rmmod nvme_fabrics 00:35:24.037 rmmod nvme_keyring 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2237923 ']' 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2237923 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2237923 ']' 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2237923 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2237923 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 2237923' 00:35:24.037 killing process with pid 2237923 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2237923 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2237923 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:24.037 08:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:26.584 00:35:26.584 real 0m15.551s 00:35:26.584 user 0m33.095s 00:35:26.584 sys 0m7.306s 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:26.584 ************************************ 00:35:26.584 END TEST nvmf_nmic 00:35:26.584 ************************************ 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:26.584 ************************************ 00:35:26.584 START TEST nvmf_fio_target 00:35:26.584 ************************************ 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:26.584 * Looking for test storage... 
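
[Editor's note] As the fio_target preamble starts here, a condensed recap of the host-side flow from the nmic test that just ended may be useful. This is a hedged replay, not harness code: the wait loop simplifies the harness's waitforserial, and $NVME_HOSTNQN/$NVME_HOSTID stand for the generated host identity visible in the trace above:

  # Two paths to the same subsystem: listeners on ports 4420 and 4421.
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
  # Wait until the namespace shows up with the expected serial.
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
      sleep 2
  done
  scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v   # 4 KiB writes, QD1, 1 s, verified
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1               # drops both controllers
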
00:35:26.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:26.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.584 --rc genhtml_branch_coverage=1 00:35:26.584 --rc genhtml_function_coverage=1 00:35:26.584 --rc genhtml_legend=1 00:35:26.584 --rc geninfo_all_blocks=1 00:35:26.584 --rc geninfo_unexecuted_blocks=1 00:35:26.584 00:35:26.584 ' 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:26.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.584 --rc genhtml_branch_coverage=1 00:35:26.584 --rc genhtml_function_coverage=1 00:35:26.584 --rc genhtml_legend=1 00:35:26.584 --rc geninfo_all_blocks=1 00:35:26.584 --rc geninfo_unexecuted_blocks=1 00:35:26.584 00:35:26.584 ' 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:26.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.584 --rc genhtml_branch_coverage=1 00:35:26.584 --rc genhtml_function_coverage=1 00:35:26.584 --rc genhtml_legend=1 00:35:26.584 --rc geninfo_all_blocks=1 00:35:26.584 --rc geninfo_unexecuted_blocks=1 00:35:26.584 00:35:26.584 ' 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:26.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:26.584 --rc genhtml_branch_coverage=1 00:35:26.584 --rc genhtml_function_coverage=1 00:35:26.584 --rc genhtml_legend=1 00:35:26.584 --rc geninfo_all_blocks=1 00:35:26.584 --rc geninfo_unexecuted_blocks=1 00:35:26.584 
00:35:26.584 ' 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:26.584 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:26.585 08:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:34.736 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:34.736 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:34.736 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:34.736 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:34.736 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:34.736 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:34.736 08:34:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:34.736 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:34.736 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:34.736 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:35:34.736 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:34.736 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:35:34.736 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:34.736 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:35:34.736 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:34.736 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:34.736 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:34.736 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:34.736 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:34.737 08:34:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:34.737 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:34.737 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:34.737 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:34.737 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:34.737 08:34:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:34.737 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:34.737 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:34.737 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:34.737 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:34.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:34.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:35:34.737 00:35:34.737 --- 10.0.0.2 ping statistics --- 00:35:34.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.737 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:35:34.737 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:34.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:34.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:35:34.737 00:35:34.737 --- 10.0.0.1 ping statistics --- 00:35:34.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.737 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:35:34.737 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:34.737 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:35:34.737 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:34.737 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:34.737 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:34.737 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:34.737 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:34.737 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:34.737 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:34.737 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:35:34.737 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:34.738 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:34.738 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:34.738 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2243474 00:35:34.738 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2243474 00:35:34.738 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:34.738 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2243474 ']' 00:35:34.738 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:34.738 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:34.738 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:34.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
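Everything from prepare_net_devs down to the two pings above is nvmf_tcp_init building the standard phy TCP topology: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace as the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule opens TCP/4420 for NVMe/TCP. Condensed from the trace, with interface names and addresses exactly as logged:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1      # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                        # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # namespace -> initiator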
00:35:34.738 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:34.738 08:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:34.738 [2024-11-28 08:34:31.199050] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:34.738 [2024-11-28 08:34:31.200199] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:35:34.738 [2024-11-28 08:34:31.200253] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:34.738 [2024-11-28 08:34:31.299225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:34.738 [2024-11-28 08:34:31.352180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:34.738 [2024-11-28 08:34:31.352230] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:34.738 [2024-11-28 08:34:31.352238] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:34.738 [2024-11-28 08:34:31.352245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:34.738 [2024-11-28 08:34:31.352252] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:34.738 [2024-11-28 08:34:31.354645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:34.738 [2024-11-28 08:34:31.354805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:34.738 [2024-11-28 08:34:31.354964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:34.738 [2024-11-28 08:34:31.354965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:34.738 [2024-11-28 08:34:31.433325] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:34.738 [2024-11-28 08:34:31.434227] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:34.738 [2024-11-28 08:34:31.434456] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:34.738 [2024-11-28 08:34:31.435074] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:34.738 [2024-11-28 08:34:31.435081] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
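The notices above are nvmf_tgt coming up inside the namespace with --interrupt-mode -m 0xF: one reactor per core in the 0xF mask, and every spdk_thread switched to interrupt-driven operation instead of busy polling. A condensed sketch of the launch-and-wait pattern, with paths relative to an SPDK checkout; the readiness loop is illustrative, waitforlisten in the harness does roughly this through rpc.py:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    # Block until the target answers on its default UNIX-domain RPC socket.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1   # bail out if the app died during startup
      sleep 0.5
    done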
00:35:34.999 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:34.999 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:35:34.999 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:34.999 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:34.999 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:34.999 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:34.999 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:34.999 [2024-11-28 08:34:32.239980] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:35.260 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:35.260 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:35:35.260 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:35.521 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:35:35.521 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:35.782 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:35:35.782 08:34:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:36.043 08:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:35:36.043 08:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:35:36.305 08:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:36.305 08:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:35:36.305 08:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:36.568 08:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:35:36.568 08:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:36.830 08:34:33 
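The RPC sequence that starts here and continues just below builds the whole target configuration fio.sh needs: a TCP transport, seven 64 MiB malloc bdevs, a RAID0 and a concat bdev layered on four of them, and one subsystem exposing four namespaces on 10.0.0.2:4420. Collapsed into a plain script, with the socket, sizes, and NQN exactly as logged and only the loop structure ours:

    rpc=./scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_transport -t tcp -o -u 8192
    for _ in 0 1 2 3 4 5 6; do $rpc bdev_malloc_create 64 512; done   # Malloc0..Malloc6
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem $nqn -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns $nqn Malloc0
    $rpc nvmf_subsystem_add_ns $nqn Malloc1
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns $nqn raid0
    $rpc nvmf_subsystem_add_ns $nqn concat0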
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:35:36.830 08:34:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:35:37.092 08:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:37.092 08:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:37.092 08:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:37.354 08:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:37.354 08:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:37.616 08:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:37.616 [2024-11-28 08:34:34.843987] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:37.616 08:34:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:35:37.877 08:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:35:38.138 08:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:38.710 08:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:35:38.710 08:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:35:38.710 08:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:38.710 08:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:35:38.710 08:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:35:38.710 08:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:35:40.624 08:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:40.624 08:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:35:40.624 08:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:40.624 08:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:35:40.624 08:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:40.624 08:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:35:40.624 08:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:40.624 [global] 00:35:40.624 thread=1 00:35:40.624 invalidate=1 00:35:40.624 rw=write 00:35:40.624 time_based=1 00:35:40.624 runtime=1 00:35:40.624 ioengine=libaio 00:35:40.624 direct=1 00:35:40.624 bs=4096 00:35:40.624 iodepth=1 00:35:40.624 norandommap=0 00:35:40.624 numjobs=1 00:35:40.624 00:35:40.624 verify_dump=1 00:35:40.624 verify_backlog=512 00:35:40.624 verify_state_save=0 00:35:40.624 do_verify=1 00:35:40.624 verify=crc32c-intel 00:35:40.624 [job0] 00:35:40.624 filename=/dev/nvme0n1 00:35:40.624 [job1] 00:35:40.624 filename=/dev/nvme0n2 00:35:40.624 [job2] 00:35:40.624 filename=/dev/nvme0n3 00:35:40.624 [job3] 00:35:40.624 filename=/dev/nvme0n4 00:35:40.624 Could not set queue depth (nvme0n1) 00:35:40.624 Could not set queue depth (nvme0n2) 00:35:40.624 Could not set queue depth (nvme0n3) 00:35:40.624 Could not set queue depth (nvme0n4) 00:35:41.194 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:41.194 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:41.194 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:41.194 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:41.194 fio-3.35 00:35:41.194 Starting 4 threads 00:35:42.599 00:35:42.599 job0: (groupid=0, jobs=1): err= 0: pid=2244986: Thu Nov 28 08:34:39 2024 00:35:42.599 read: IOPS=25, BW=104KiB/s (106kB/s)(108KiB/1041msec) 00:35:42.599 slat (nsec): min=8421, max=29461, avg=21731.04, stdev=8059.01 00:35:42.599 clat (usec): min=897, max=42131, avg=29924.73, stdev=18499.01 00:35:42.599 lat (usec): min=924, max=42158, avg=29946.46, stdev=18496.71 00:35:42.599 clat percentiles (usec): 00:35:42.599 | 1.00th=[ 898], 5.00th=[ 988], 10.00th=[ 1045], 20.00th=[ 1139], 00:35:42.599 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:35:42.599 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:35:42.599 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:42.599 | 99.99th=[42206] 00:35:42.599 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:35:42.599 slat (nsec): min=5988, max=46271, avg=8795.61, stdev=2357.26 00:35:42.599 clat (usec): min=174, max=829, avg=440.24, stdev=102.08 00:35:42.599 lat (usec): min=180, max=838, avg=449.04, stdev=102.63 00:35:42.599 clat percentiles (usec): 00:35:42.599 | 1.00th=[ 196], 5.00th=[ 277], 10.00th=[ 306], 20.00th=[ 338], 00:35:42.599 | 30.00th=[ 388], 40.00th=[ 424], 50.00th=[ 461], 60.00th=[ 482], 00:35:42.599 | 70.00th=[ 506], 80.00th=[ 523], 90.00th=[ 553], 95.00th=[ 594], 00:35:42.599 | 
99.00th=[ 652], 99.50th=[ 660], 99.90th=[ 832], 99.95th=[ 832], 00:35:42.599 | 99.99th=[ 832] 00:35:42.599 bw ( KiB/s): min= 4096, max= 4096, per=39.55%, avg=4096.00, stdev= 0.00, samples=1 00:35:42.599 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:42.599 lat (usec) : 250=3.71%, 500=61.41%, 750=29.68%, 1000=0.56% 00:35:42.599 lat (msec) : 2=0.93%, 10=0.19%, 50=3.53% 00:35:42.599 cpu : usr=0.19%, sys=0.67%, ctx=542, majf=0, minf=1 00:35:42.599 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:42.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.599 issued rwts: total=27,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.599 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:42.599 job1: (groupid=0, jobs=1): err= 0: pid=2245004: Thu Nov 28 08:34:39 2024 00:35:42.599 read: IOPS=640, BW=2561KiB/s (2623kB/s)(2564KiB/1001msec) 00:35:42.599 slat (nsec): min=6744, max=44402, avg=23344.40, stdev=7549.92 00:35:42.599 clat (usec): min=163, max=1102, avg=766.86, stdev=85.88 00:35:42.599 lat (usec): min=170, max=1128, avg=790.20, stdev=87.74 00:35:42.599 clat percentiles (usec): 00:35:42.599 | 1.00th=[ 545], 5.00th=[ 635], 10.00th=[ 676], 20.00th=[ 709], 00:35:42.599 | 30.00th=[ 734], 40.00th=[ 750], 50.00th=[ 775], 60.00th=[ 783], 00:35:42.599 | 70.00th=[ 799], 80.00th=[ 824], 90.00th=[ 865], 95.00th=[ 914], 00:35:42.599 | 99.00th=[ 963], 99.50th=[ 971], 99.90th=[ 1106], 99.95th=[ 1106], 00:35:42.599 | 99.99th=[ 1106] 00:35:42.599 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:35:42.599 slat (nsec): min=9559, max=54939, avg=29377.89, stdev=9486.68 00:35:42.599 clat (usec): min=146, max=735, avg=440.80, stdev=87.70 00:35:42.599 lat (usec): min=157, max=768, avg=470.18, stdev=92.24 00:35:42.599 clat percentiles (usec): 00:35:42.599 | 1.00th=[ 255], 5.00th=[ 297], 10.00th=[ 318], 20.00th=[ 359], 00:35:42.599 | 30.00th=[ 392], 40.00th=[ 416], 50.00th=[ 445], 60.00th=[ 469], 00:35:42.599 | 70.00th=[ 494], 80.00th=[ 519], 90.00th=[ 545], 95.00th=[ 578], 00:35:42.599 | 99.00th=[ 627], 99.50th=[ 652], 99.90th=[ 685], 99.95th=[ 734], 00:35:42.599 | 99.99th=[ 734] 00:35:42.599 bw ( KiB/s): min= 4096, max= 4096, per=39.55%, avg=4096.00, stdev= 0.00, samples=1 00:35:42.599 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:42.599 lat (usec) : 250=0.60%, 500=44.98%, 750=30.63%, 1000=23.66% 00:35:42.599 lat (msec) : 2=0.12% 00:35:42.599 cpu : usr=2.60%, sys=4.40%, ctx=1665, majf=0, minf=2 00:35:42.599 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:42.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.599 issued rwts: total=641,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.599 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:42.599 job2: (groupid=0, jobs=1): err= 0: pid=2245038: Thu Nov 28 08:34:39 2024 00:35:42.599 read: IOPS=80, BW=322KiB/s (330kB/s)(332KiB/1030msec) 00:35:42.599 slat (nsec): min=27864, max=58359, avg=28670.33, stdev=3311.47 00:35:42.599 clat (usec): min=881, max=42049, avg=8523.51, stdev=15766.97 00:35:42.599 lat (usec): min=910, max=42078, avg=8552.18, stdev=15766.85 00:35:42.599 clat percentiles (usec): 00:35:42.599 | 1.00th=[ 881], 5.00th=[ 996], 10.00th=[ 1029], 20.00th=[ 1090], 00:35:42.599 
| 30.00th=[ 1123], 40.00th=[ 1156], 50.00th=[ 1205], 60.00th=[ 1237], 00:35:42.599 | 70.00th=[ 1287], 80.00th=[ 1401], 90.00th=[41681], 95.00th=[42206], 00:35:42.599 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:42.599 | 99.99th=[42206] 00:35:42.599 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:35:42.599 slat (nsec): min=9858, max=57257, avg=33615.21, stdev=9473.68 00:35:42.599 clat (usec): min=255, max=867, avg=579.86, stdev=116.15 00:35:42.599 lat (usec): min=268, max=905, avg=613.47, stdev=119.45 00:35:42.600 clat percentiles (usec): 00:35:42.600 | 1.00th=[ 297], 5.00th=[ 359], 10.00th=[ 429], 20.00th=[ 482], 00:35:42.600 | 30.00th=[ 523], 40.00th=[ 553], 50.00th=[ 586], 60.00th=[ 619], 00:35:42.600 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 750], 00:35:42.600 | 99.00th=[ 816], 99.50th=[ 832], 99.90th=[ 865], 99.95th=[ 865], 00:35:42.600 | 99.99th=[ 865] 00:35:42.600 bw ( KiB/s): min= 4096, max= 4096, per=39.55%, avg=4096.00, stdev= 0.00, samples=1 00:35:42.600 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:42.600 lat (usec) : 500=20.84%, 750=60.67%, 1000=5.38% 00:35:42.600 lat (msec) : 2=10.59%, 50=2.52% 00:35:42.600 cpu : usr=1.17%, sys=2.43%, ctx=596, majf=0, minf=1 00:35:42.600 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:42.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.600 issued rwts: total=83,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.600 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:42.600 job3: (groupid=0, jobs=1): err= 0: pid=2245049: Thu Nov 28 08:34:39 2024 00:35:42.600 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:42.600 slat (nsec): min=8017, max=61028, avg=26615.99, stdev=3956.68 00:35:42.600 clat (usec): min=645, max=1307, avg=1054.62, stdev=90.64 00:35:42.600 lat (usec): min=671, max=1333, avg=1081.24, stdev=90.51 00:35:42.600 clat percentiles (usec): 00:35:42.600 | 1.00th=[ 807], 5.00th=[ 881], 10.00th=[ 930], 20.00th=[ 996], 00:35:42.600 | 30.00th=[ 1029], 40.00th=[ 1045], 50.00th=[ 1057], 60.00th=[ 1090], 00:35:42.600 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1188], 00:35:42.600 | 99.00th=[ 1221], 99.50th=[ 1254], 99.90th=[ 1303], 99.95th=[ 1303], 00:35:42.600 | 99.99th=[ 1303] 00:35:42.600 write: IOPS=646, BW=2585KiB/s (2647kB/s)(2588KiB/1001msec); 0 zone resets 00:35:42.600 slat (nsec): min=10515, max=65373, avg=32059.29, stdev=9064.47 00:35:42.600 clat (usec): min=171, max=1135, avg=640.78, stdev=137.34 00:35:42.600 lat (usec): min=185, max=1170, avg=672.84, stdev=140.78 00:35:42.600 clat percentiles (usec): 00:35:42.600 | 1.00th=[ 334], 5.00th=[ 408], 10.00th=[ 457], 20.00th=[ 519], 00:35:42.600 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 685], 00:35:42.600 | 70.00th=[ 725], 80.00th=[ 758], 90.00th=[ 807], 95.00th=[ 848], 00:35:42.600 | 99.00th=[ 938], 99.50th=[ 971], 99.90th=[ 1139], 99.95th=[ 1139], 00:35:42.600 | 99.99th=[ 1139] 00:35:42.600 bw ( KiB/s): min= 4096, max= 4096, per=39.55%, avg=4096.00, stdev= 0.00, samples=1 00:35:42.600 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:42.600 lat (usec) : 250=0.26%, 500=9.15%, 750=33.82%, 1000=21.83% 00:35:42.600 lat (msec) : 2=34.94% 00:35:42.600 cpu : usr=1.70%, sys=3.60%, ctx=1161, majf=0, minf=1 00:35:42.600 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:35:42.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.600 issued rwts: total=512,647,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.600 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:42.600 00:35:42.600 Run status group 0 (all jobs): 00:35:42.600 READ: bw=4853KiB/s (4969kB/s), 104KiB/s-2561KiB/s (106kB/s-2623kB/s), io=5052KiB (5173kB), run=1001-1041msec 00:35:42.600 WRITE: bw=10.1MiB/s (10.6MB/s), 1967KiB/s-4092KiB/s (2015kB/s-4190kB/s), io=10.5MiB (11.0MB), run=1001-1041msec 00:35:42.600 00:35:42.600 Disk stats (read/write): 00:35:42.600 nvme0n1: ios=57/512, merge=0/0, ticks=1984/214, in_queue=2198, util=96.71% 00:35:42.600 nvme0n2: ios=560/793, merge=0/0, ticks=831/333, in_queue=1164, util=94.10% 00:35:42.600 nvme0n3: ios=34/512, merge=0/0, ticks=1376/233, in_queue=1609, util=96.25% 00:35:42.600 nvme0n4: ios=419/512, merge=0/0, ticks=1284/311, in_queue=1595, util=96.05% 00:35:42.600 08:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:35:42.600 [global] 00:35:42.600 thread=1 00:35:42.600 invalidate=1 00:35:42.600 rw=randwrite 00:35:42.600 time_based=1 00:35:42.600 runtime=1 00:35:42.600 ioengine=libaio 00:35:42.600 direct=1 00:35:42.600 bs=4096 00:35:42.600 iodepth=1 00:35:42.600 norandommap=0 00:35:42.600 numjobs=1 00:35:42.600 00:35:42.600 verify_dump=1 00:35:42.600 verify_backlog=512 00:35:42.600 verify_state_save=0 00:35:42.600 do_verify=1 00:35:42.600 verify=crc32c-intel 00:35:42.600 [job0] 00:35:42.600 filename=/dev/nvme0n1 00:35:42.600 [job1] 00:35:42.600 filename=/dev/nvme0n2 00:35:42.600 [job2] 00:35:42.600 filename=/dev/nvme0n3 00:35:42.600 [job3] 00:35:42.600 filename=/dev/nvme0n4 00:35:42.600 Could not set queue depth (nvme0n1) 00:35:42.600 Could not set queue depth (nvme0n2) 00:35:42.600 Could not set queue depth (nvme0n3) 00:35:42.600 Could not set queue depth (nvme0n4) 00:35:42.861 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:42.861 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:42.861 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:42.861 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:42.861 fio-3.35 00:35:42.861 Starting 4 threads 00:35:44.271 00:35:44.271 job0: (groupid=0, jobs=1): err= 0: pid=2245468: Thu Nov 28 08:34:41 2024 00:35:44.271 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:44.271 slat (nsec): min=10826, max=60039, avg=26815.36, stdev=2865.19 00:35:44.271 clat (usec): min=580, max=1869, avg=971.32, stdev=75.25 00:35:44.271 lat (usec): min=607, max=1895, avg=998.13, stdev=75.28 00:35:44.271 clat percentiles (usec): 00:35:44.271 | 1.00th=[ 799], 5.00th=[ 857], 10.00th=[ 898], 20.00th=[ 930], 00:35:44.271 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 971], 60.00th=[ 988], 00:35:44.271 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1037], 95.00th=[ 1057], 00:35:44.271 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1876], 99.95th=[ 1876], 00:35:44.271 | 99.99th=[ 1876] 00:35:44.271 write: IOPS=725, BW=2901KiB/s (2971kB/s)(2904KiB/1001msec); 0 zone resets 00:35:44.271 slat 
(nsec): min=5691, max=65397, avg=29140.57, stdev=10687.67 00:35:44.271 clat (usec): min=251, max=1049, avg=631.63, stdev=137.78 00:35:44.271 lat (usec): min=263, max=1083, avg=660.77, stdev=143.04 00:35:44.271 clat percentiles (usec): 00:35:44.271 | 1.00th=[ 343], 5.00th=[ 383], 10.00th=[ 441], 20.00th=[ 519], 00:35:44.271 | 30.00th=[ 578], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 660], 00:35:44.271 | 70.00th=[ 693], 80.00th=[ 742], 90.00th=[ 807], 95.00th=[ 865], 00:35:44.271 | 99.00th=[ 938], 99.50th=[ 996], 99.90th=[ 1057], 99.95th=[ 1057], 00:35:44.271 | 99.99th=[ 1057] 00:35:44.271 bw ( KiB/s): min= 4096, max= 4096, per=45.95%, avg=4096.00, stdev= 0.00, samples=1 00:35:44.271 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:44.271 lat (usec) : 500=10.58%, 750=37.40%, 1000=39.90% 00:35:44.271 lat (msec) : 2=12.12% 00:35:44.271 cpu : usr=2.00%, sys=3.40%, ctx=1242, majf=0, minf=1 00:35:44.271 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:44.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.271 issued rwts: total=512,726,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.271 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:44.271 job1: (groupid=0, jobs=1): err= 0: pid=2245486: Thu Nov 28 08:34:41 2024 00:35:44.271 read: IOPS=17, BW=71.7KiB/s (73.4kB/s)(72.0KiB/1004msec) 00:35:44.271 slat (nsec): min=25167, max=25689, avg=25458.61, stdev=135.15 00:35:44.271 clat (usec): min=1040, max=42090, avg=39546.84, stdev=9615.13 00:35:44.271 lat (usec): min=1065, max=42115, avg=39572.30, stdev=9615.15 00:35:44.271 clat percentiles (usec): 00:35:44.271 | 1.00th=[ 1037], 5.00th=[ 1037], 10.00th=[41157], 20.00th=[41681], 00:35:44.271 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:35:44.271 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:44.271 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:44.271 | 99.99th=[42206] 00:35:44.271 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:35:44.271 slat (nsec): min=9411, max=60481, avg=30650.20, stdev=7447.46 00:35:44.271 clat (usec): min=149, max=1107, avg=529.26, stdev=155.59 00:35:44.271 lat (usec): min=180, max=1122, avg=559.91, stdev=156.43 00:35:44.271 clat percentiles (usec): 00:35:44.271 | 1.00th=[ 223], 5.00th=[ 285], 10.00th=[ 326], 20.00th=[ 388], 00:35:44.271 | 30.00th=[ 445], 40.00th=[ 486], 50.00th=[ 529], 60.00th=[ 570], 00:35:44.271 | 70.00th=[ 611], 80.00th=[ 652], 90.00th=[ 717], 95.00th=[ 791], 00:35:44.271 | 99.00th=[ 898], 99.50th=[ 1090], 99.90th=[ 1106], 99.95th=[ 1106], 00:35:44.271 | 99.99th=[ 1106] 00:35:44.271 bw ( KiB/s): min= 4096, max= 4096, per=45.95%, avg=4096.00, stdev= 0.00, samples=1 00:35:44.271 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:44.271 lat (usec) : 250=2.08%, 500=40.00%, 750=46.60%, 1000=7.36% 00:35:44.271 lat (msec) : 2=0.75%, 50=3.21% 00:35:44.271 cpu : usr=1.40%, sys=0.90%, ctx=530, majf=0, minf=1 00:35:44.271 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:44.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.271 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.271 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.271 latency : target=0, window=0, percentile=100.00%, depth=1 
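(An aside between job reports: the /dev/nvme0n1..n4 devices these jobs exercise were attached by the earlier nvme connect, and waitforserial simply polls lsblk until all four namespaces carrying the subsystem serial appear. Condensed, with the hostnqn and hostid exactly as logged:

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
    # Poll until all four namespaces show up with the subsystem serial.
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -eq 4 ]; do
      sleep 2
    done

The remaining fio job reports for this run continue below.)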
00:35:44.271 job2: (groupid=0, jobs=1): err= 0: pid=2245504: Thu Nov 28 08:34:41 2024 00:35:44.271 read: IOPS=18, BW=75.2KiB/s (77.1kB/s)(76.0KiB/1010msec) 00:35:44.271 slat (nsec): min=10233, max=26348, avg=25066.05, stdev=3599.30 00:35:44.271 clat (usec): min=900, max=42089, avg=39270.59, stdev=9303.87 00:35:44.271 lat (usec): min=910, max=42115, avg=39295.66, stdev=9307.46 00:35:44.271 clat percentiles (usec): 00:35:44.271 | 1.00th=[ 898], 5.00th=[ 898], 10.00th=[41157], 20.00th=[41157], 00:35:44.271 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:35:44.272 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:35:44.272 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:44.272 | 99.99th=[42206] 00:35:44.272 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:35:44.272 slat (nsec): min=9646, max=68035, avg=30205.10, stdev=8788.30 00:35:44.272 clat (usec): min=191, max=1050, avg=476.59, stdev=155.42 00:35:44.272 lat (usec): min=224, max=1084, avg=506.80, stdev=157.42 00:35:44.272 clat percentiles (usec): 00:35:44.272 | 1.00th=[ 210], 5.00th=[ 265], 10.00th=[ 306], 20.00th=[ 347], 00:35:44.272 | 30.00th=[ 371], 40.00th=[ 412], 50.00th=[ 461], 60.00th=[ 502], 00:35:44.272 | 70.00th=[ 545], 80.00th=[ 594], 90.00th=[ 676], 95.00th=[ 791], 00:35:44.272 | 99.00th=[ 938], 99.50th=[ 979], 99.90th=[ 1057], 99.95th=[ 1057], 00:35:44.272 | 99.99th=[ 1057] 00:35:44.272 bw ( KiB/s): min= 4096, max= 4096, per=45.95%, avg=4096.00, stdev= 0.00, samples=1 00:35:44.272 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:44.272 lat (usec) : 250=3.20%, 500=54.61%, 750=31.64%, 1000=6.78% 00:35:44.272 lat (msec) : 2=0.38%, 50=3.39% 00:35:44.272 cpu : usr=0.79%, sys=1.49%, ctx=531, majf=0, minf=1 00:35:44.272 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:44.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.272 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.272 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:44.272 job3: (groupid=0, jobs=1): err= 0: pid=2245510: Thu Nov 28 08:34:41 2024 00:35:44.272 read: IOPS=15, BW=63.1KiB/s (64.6kB/s)(64.0KiB/1015msec) 00:35:44.272 slat (nsec): min=27456, max=27893, avg=27630.88, stdev=110.57 00:35:44.272 clat (usec): min=957, max=42157, avg=39378.46, stdev=10246.60 00:35:44.272 lat (usec): min=985, max=42185, avg=39406.09, stdev=10246.53 00:35:44.272 clat percentiles (usec): 00:35:44.272 | 1.00th=[ 955], 5.00th=[ 955], 10.00th=[41681], 20.00th=[41681], 00:35:44.272 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:35:44.272 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:44.272 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:44.272 | 99.99th=[42206] 00:35:44.272 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:35:44.272 slat (nsec): min=9151, max=61658, avg=31745.59, stdev=8614.77 00:35:44.272 clat (usec): min=261, max=1158, avg=709.94, stdev=155.14 00:35:44.272 lat (usec): min=284, max=1201, avg=741.69, stdev=157.44 00:35:44.272 clat percentiles (usec): 00:35:44.272 | 1.00th=[ 351], 5.00th=[ 433], 10.00th=[ 498], 20.00th=[ 586], 00:35:44.272 | 30.00th=[ 644], 40.00th=[ 685], 50.00th=[ 725], 60.00th=[ 750], 00:35:44.272 | 70.00th=[ 791], 80.00th=[ 840], 90.00th=[ 889], 
95.00th=[ 963], 00:35:44.272 | 99.00th=[ 1057], 99.50th=[ 1106], 99.90th=[ 1156], 99.95th=[ 1156], 00:35:44.272 | 99.99th=[ 1156] 00:35:44.272 bw ( KiB/s): min= 4096, max= 4096, per=45.95%, avg=4096.00, stdev= 0.00, samples=1 00:35:44.272 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:44.272 lat (usec) : 500=10.04%, 750=47.35%, 1000=38.07% 00:35:44.272 lat (msec) : 2=1.70%, 50=2.84% 00:35:44.272 cpu : usr=1.18%, sys=1.97%, ctx=528, majf=0, minf=1 00:35:44.272 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:44.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.272 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.272 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:44.272 00:35:44.272 Run status group 0 (all jobs): 00:35:44.272 READ: bw=2227KiB/s (2280kB/s), 63.1KiB/s-2046KiB/s (64.6kB/s-2095kB/s), io=2260KiB (2314kB), run=1001-1015msec 00:35:44.272 WRITE: bw=8914KiB/s (9128kB/s), 2018KiB/s-2901KiB/s (2066kB/s-2971kB/s), io=9048KiB (9265kB), run=1001-1015msec 00:35:44.272 00:35:44.272 Disk stats (read/write): 00:35:44.272 nvme0n1: ios=498/512, merge=0/0, ticks=1411/323, in_queue=1734, util=96.49% 00:35:44.272 nvme0n2: ios=50/512, merge=0/0, ticks=664/253, in_queue=917, util=98.98% 00:35:44.272 nvme0n3: ios=68/512, merge=0/0, ticks=645/235, in_queue=880, util=95.23% 00:35:44.272 nvme0n4: ios=11/512, merge=0/0, ticks=421/279, in_queue=700, util=89.48% 00:35:44.272 08:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:35:44.272 [global] 00:35:44.272 thread=1 00:35:44.272 invalidate=1 00:35:44.272 rw=write 00:35:44.272 time_based=1 00:35:44.272 runtime=1 00:35:44.272 ioengine=libaio 00:35:44.272 direct=1 00:35:44.272 bs=4096 00:35:44.272 iodepth=128 00:35:44.272 norandommap=0 00:35:44.272 numjobs=1 00:35:44.272 00:35:44.272 verify_dump=1 00:35:44.272 verify_backlog=512 00:35:44.272 verify_state_save=0 00:35:44.272 do_verify=1 00:35:44.272 verify=crc32c-intel 00:35:44.272 [job0] 00:35:44.272 filename=/dev/nvme0n1 00:35:44.272 [job1] 00:35:44.272 filename=/dev/nvme0n2 00:35:44.272 [job2] 00:35:44.272 filename=/dev/nvme0n3 00:35:44.272 [job3] 00:35:44.272 filename=/dev/nvme0n4 00:35:44.272 Could not set queue depth (nvme0n1) 00:35:44.272 Could not set queue depth (nvme0n2) 00:35:44.272 Could not set queue depth (nvme0n3) 00:35:44.272 Could not set queue depth (nvme0n4) 00:35:44.533 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:44.533 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:44.533 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:44.533 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:44.533 fio-3.35 00:35:44.533 Starting 4 threads 00:35:45.942 00:35:45.942 job0: (groupid=0, jobs=1): err= 0: pid=2245922: Thu Nov 28 08:34:42 2024 00:35:45.942 read: IOPS=6483, BW=25.3MiB/s (26.6MB/s)(25.4MiB/1004msec) 00:35:45.942 slat (nsec): min=897, max=19878k, avg=77201.61, stdev=616601.36 00:35:45.942 clat (usec): min=1775, max=37514, avg=10680.77, stdev=5144.26 00:35:45.942 lat (usec): min=1779, 
max=37521, avg=10757.98, stdev=5169.19 00:35:45.942 clat percentiles (usec): 00:35:45.942 | 1.00th=[ 2442], 5.00th=[ 5080], 10.00th=[ 6849], 20.00th=[ 7767], 00:35:45.942 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9765], 00:35:45.942 | 70.00th=[11076], 80.00th=[13042], 90.00th=[16909], 95.00th=[22676], 00:35:45.942 | 99.00th=[30540], 99.50th=[31851], 99.90th=[32113], 99.95th=[32113], 00:35:45.942 | 99.99th=[37487] 00:35:45.942 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec); 0 zone resets 00:35:45.942 slat (nsec): min=1547, max=11286k, avg=62263.24, stdev=444529.38 00:35:45.942 clat (usec): min=545, max=26617, avg=8698.55, stdev=3653.31 00:35:45.942 lat (usec): min=1006, max=26979, avg=8760.81, stdev=3678.83 00:35:45.942 clat percentiles (usec): 00:35:45.942 | 1.00th=[ 1876], 5.00th=[ 3621], 10.00th=[ 4817], 20.00th=[ 6390], 00:35:45.942 | 30.00th=[ 7046], 40.00th=[ 7570], 50.00th=[ 8225], 60.00th=[ 8717], 00:35:45.942 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[13173], 95.00th=[15664], 00:35:45.942 | 99.00th=[22414], 99.50th=[24511], 99.90th=[26084], 99.95th=[26346], 00:35:45.942 | 99.99th=[26608] 00:35:45.942 bw ( KiB/s): min=24576, max=28672, per=25.37%, avg=26624.00, stdev=2896.31, samples=2 00:35:45.942 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:35:45.942 lat (usec) : 750=0.01%, 1000=0.01% 00:35:45.942 lat (msec) : 2=0.73%, 4=3.89%, 10=63.83%, 20=27.59%, 50=3.95% 00:35:45.942 cpu : usr=4.09%, sys=6.88%, ctx=561, majf=0, minf=1 00:35:45.942 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:35:45.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.942 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:45.942 issued rwts: total=6509,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.942 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:45.942 job1: (groupid=0, jobs=1): err= 0: pid=2245943: Thu Nov 28 08:34:42 2024 00:35:45.942 read: IOPS=7822, BW=30.6MiB/s (32.0MB/s)(30.6MiB/1002msec) 00:35:45.942 slat (nsec): min=937, max=10449k, avg=63518.02, stdev=441066.41 00:35:45.942 clat (usec): min=1563, max=27317, avg=8468.10, stdev=2857.90 00:35:45.942 lat (usec): min=2502, max=27342, avg=8531.62, stdev=2878.17 00:35:45.942 clat percentiles (usec): 00:35:45.942 | 1.00th=[ 3490], 5.00th=[ 5342], 10.00th=[ 5997], 20.00th=[ 6652], 00:35:45.942 | 30.00th=[ 6915], 40.00th=[ 7439], 50.00th=[ 7767], 60.00th=[ 8291], 00:35:45.942 | 70.00th=[ 8979], 80.00th=[10290], 90.00th=[11338], 95.00th=[13698], 00:35:45.942 | 99.00th=[23200], 99.50th=[23200], 99.90th=[24773], 99.95th=[24773], 00:35:45.942 | 99.99th=[27395] 00:35:45.942 write: IOPS=8175, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1002msec); 0 zone resets 00:35:45.942 slat (nsec): min=1595, max=16425k, avg=54243.96, stdev=374800.89 00:35:45.942 clat (usec): min=849, max=17689, avg=7171.39, stdev=1771.91 00:35:45.942 lat (usec): min=852, max=17702, avg=7225.64, stdev=1780.78 00:35:45.942 clat percentiles (usec): 00:35:45.942 | 1.00th=[ 2769], 5.00th=[ 4359], 10.00th=[ 5014], 20.00th=[ 5997], 00:35:45.942 | 30.00th=[ 6587], 40.00th=[ 6783], 50.00th=[ 7046], 60.00th=[ 7504], 00:35:45.942 | 70.00th=[ 8029], 80.00th=[ 8586], 90.00th=[ 8979], 95.00th=[ 9765], 00:35:45.942 | 99.00th=[11863], 99.50th=[13698], 99.90th=[17695], 99.95th=[17695], 00:35:45.942 | 99.99th=[17695] 00:35:45.942 bw ( KiB/s): min=32768, max=32768, per=31.22%, avg=32768.00, stdev= 0.00, samples=2 00:35:45.942 iops : min= 8192, max= 8192, 
avg=8192.00, stdev= 0.00, samples=2 00:35:45.942 lat (usec) : 1000=0.02% 00:35:45.942 lat (msec) : 2=0.26%, 4=2.47%, 10=84.59%, 20=12.07%, 50=0.59% 00:35:45.942 cpu : usr=5.29%, sys=7.39%, ctx=658, majf=0, minf=1 00:35:45.942 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:35:45.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.942 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:45.942 issued rwts: total=7838,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.942 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:45.942 job2: (groupid=0, jobs=1): err= 0: pid=2245960: Thu Nov 28 08:34:42 2024 00:35:45.942 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:35:45.942 slat (nsec): min=946, max=14278k, avg=73013.59, stdev=521349.83 00:35:45.942 clat (usec): min=2781, max=26930, avg=9910.23, stdev=3470.94 00:35:45.942 lat (usec): min=2787, max=26945, avg=9983.24, stdev=3494.47 00:35:45.942 clat percentiles (usec): 00:35:45.942 | 1.00th=[ 3818], 5.00th=[ 5604], 10.00th=[ 6390], 20.00th=[ 7701], 00:35:45.942 | 30.00th=[ 8225], 40.00th=[ 8848], 50.00th=[ 9503], 60.00th=[10290], 00:35:45.942 | 70.00th=[10814], 80.00th=[11469], 90.00th=[13173], 95.00th=[15664], 00:35:45.942 | 99.00th=[25822], 99.50th=[26084], 99.90th=[26870], 99.95th=[26870], 00:35:45.942 | 99.99th=[26870] 00:35:45.942 write: IOPS=6891, BW=26.9MiB/s (28.2MB/s)(27.0MiB/1003msec); 0 zone resets 00:35:45.942 slat (nsec): min=1640, max=8988.7k, avg=66181.69, stdev=469515.95 00:35:45.942 clat (usec): min=784, max=33691, avg=8841.42, stdev=3430.30 00:35:45.942 lat (usec): min=1451, max=33693, avg=8907.60, stdev=3444.60 00:35:45.942 clat percentiles (usec): 00:35:45.942 | 1.00th=[ 2409], 5.00th=[ 4293], 10.00th=[ 5211], 20.00th=[ 6390], 00:35:45.942 | 30.00th=[ 7242], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 9372], 00:35:45.942 | 70.00th=[ 9896], 80.00th=[10683], 90.00th=[11994], 95.00th=[15008], 00:35:45.942 | 99.00th=[21103], 99.50th=[21627], 99.90th=[30278], 99.95th=[33817], 00:35:45.942 | 99.99th=[33817] 00:35:45.942 bw ( KiB/s): min=23696, max=30576, per=25.86%, avg=27136.00, stdev=4864.89, samples=2 00:35:45.942 iops : min= 5924, max= 7644, avg=6784.00, stdev=1216.22, samples=2 00:35:45.942 lat (usec) : 1000=0.01% 00:35:45.942 lat (msec) : 2=0.19%, 4=2.68%, 10=61.02%, 20=34.33%, 50=1.78% 00:35:45.943 cpu : usr=4.79%, sys=6.39%, ctx=460, majf=0, minf=2 00:35:45.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:35:45.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:45.943 issued rwts: total=6656,6912,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:45.943 job3: (groupid=0, jobs=1): err= 0: pid=2245966: Thu Nov 28 08:34:42 2024 00:35:45.943 read: IOPS=4451, BW=17.4MiB/s (18.2MB/s)(17.5MiB/1005msec) 00:35:45.943 slat (nsec): min=934, max=18612k, avg=93574.84, stdev=754719.44 00:35:45.943 clat (usec): min=936, max=28411, avg=11884.79, stdev=4897.01 00:35:45.943 lat (usec): min=2326, max=29878, avg=11978.37, stdev=4934.57 00:35:45.943 clat percentiles (usec): 00:35:45.943 | 1.00th=[ 4621], 5.00th=[ 6587], 10.00th=[ 7177], 20.00th=[ 7832], 00:35:45.943 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[10159], 60.00th=[12387], 00:35:45.943 | 70.00th=[14353], 80.00th=[15664], 90.00th=[17957], 95.00th=[22676], 
00:35:45.943 | 99.00th=[25822], 99.50th=[28443], 99.90th=[28443], 99.95th=[28443], 00:35:45.943 | 99.99th=[28443] 00:35:45.943 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:35:45.943 slat (nsec): min=1575, max=10133k, avg=114665.33, stdev=644837.85 00:35:45.943 clat (usec): min=1216, max=73848, avg=16124.37, stdev=14279.13 00:35:45.943 lat (usec): min=1230, max=73857, avg=16239.04, stdev=14367.83 00:35:45.943 clat percentiles (usec): 00:35:45.943 | 1.00th=[ 2114], 5.00th=[ 4555], 10.00th=[ 5342], 20.00th=[ 7635], 00:35:45.943 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[10814], 60.00th=[13566], 00:35:45.943 | 70.00th=[16188], 80.00th=[18744], 90.00th=[34866], 95.00th=[56361], 00:35:45.943 | 99.00th=[68682], 99.50th=[69731], 99.90th=[73925], 99.95th=[73925], 00:35:45.943 | 99.99th=[73925] 00:35:45.943 bw ( KiB/s): min=18032, max=18832, per=17.56%, avg=18432.00, stdev=565.69, samples=2 00:35:45.943 iops : min= 4508, max= 4708, avg=4608.00, stdev=141.42, samples=2 00:35:45.943 lat (usec) : 1000=0.01% 00:35:45.943 lat (msec) : 2=0.46%, 4=1.67%, 10=45.19%, 20=39.46%, 50=9.88% 00:35:45.943 lat (msec) : 100=3.33% 00:35:45.943 cpu : usr=3.29%, sys=4.68%, ctx=504, majf=0, minf=2 00:35:45.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:35:45.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:45.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:45.943 issued rwts: total=4474,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:45.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:45.943 00:35:45.943 Run status group 0 (all jobs): 00:35:45.943 READ: bw=99.0MiB/s (104MB/s), 17.4MiB/s-30.6MiB/s (18.2MB/s-32.0MB/s), io=99.5MiB (104MB), run=1002-1005msec 00:35:45.943 WRITE: bw=102MiB/s (107MB/s), 17.9MiB/s-31.9MiB/s (18.8MB/s-33.5MB/s), io=103MiB (108MB), run=1002-1005msec 00:35:45.943 00:35:45.943 Disk stats (read/write): 00:35:45.943 nvme0n1: ios=4784/5120, merge=0/0, ticks=40185/31859, in_queue=72044, util=82.36% 00:35:45.943 nvme0n2: ios=6181/6533, merge=0/0, ticks=33265/28774, in_queue=62039, util=97.31% 00:35:45.943 nvme0n3: ios=5556/5632, merge=0/0, ticks=37243/32840, in_queue=70083, util=96.04% 00:35:45.943 nvme0n4: ios=3108/3497, merge=0/0, ticks=37380/57837, in_queue=95217, util=99.21% 00:35:45.943 08:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:35:45.943 [global] 00:35:45.943 thread=1 00:35:45.943 invalidate=1 00:35:45.943 rw=randwrite 00:35:45.943 time_based=1 00:35:45.943 runtime=1 00:35:45.943 ioengine=libaio 00:35:45.943 direct=1 00:35:45.943 bs=4096 00:35:45.943 iodepth=128 00:35:45.943 norandommap=0 00:35:45.943 numjobs=1 00:35:45.943 00:35:45.943 verify_dump=1 00:35:45.943 verify_backlog=512 00:35:45.943 verify_state_save=0 00:35:45.943 do_verify=1 00:35:45.943 verify=crc32c-intel 00:35:45.943 [job0] 00:35:45.943 filename=/dev/nvme0n1 00:35:45.943 [job1] 00:35:45.943 filename=/dev/nvme0n2 00:35:45.943 [job2] 00:35:45.943 filename=/dev/nvme0n3 00:35:45.943 [job3] 00:35:45.943 filename=/dev/nvme0n4 00:35:45.943 Could not set queue depth (nvme0n1) 00:35:45.943 Could not set queue depth (nvme0n2) 00:35:45.943 Could not set queue depth (nvme0n3) 00:35:45.943 Could not set queue depth (nvme0n4) 00:35:46.207 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:35:46.207 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:46.207 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:46.207 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:46.207 fio-3.35 00:35:46.207 Starting 4 threads 00:35:47.623 00:35:47.623 job0: (groupid=0, jobs=1): err= 0: pid=2246381: Thu Nov 28 08:34:44 2024 00:35:47.623 read: IOPS=8643, BW=33.8MiB/s (35.4MB/s)(34.0MiB/1007msec) 00:35:47.623 slat (nsec): min=905, max=6893.1k, avg=59482.56, stdev=440953.93 00:35:47.623 clat (usec): min=2664, max=17257, avg=7746.35, stdev=2066.37 00:35:47.623 lat (usec): min=2668, max=18993, avg=7805.83, stdev=2094.09 00:35:47.623 clat percentiles (usec): 00:35:47.623 | 1.00th=[ 3818], 5.00th=[ 5211], 10.00th=[ 5669], 20.00th=[ 6194], 00:35:47.623 | 30.00th=[ 6456], 40.00th=[ 6849], 50.00th=[ 7242], 60.00th=[ 7832], 00:35:47.623 | 70.00th=[ 8455], 80.00th=[ 9241], 90.00th=[10683], 95.00th=[11994], 00:35:47.623 | 99.00th=[13698], 99.50th=[14484], 99.90th=[15926], 99.95th=[16450], 00:35:47.623 | 99.99th=[17171] 00:35:47.623 write: IOPS=9134, BW=35.7MiB/s (37.4MB/s)(35.9MiB/1007msec); 0 zone resets 00:35:47.623 slat (nsec): min=1559, max=6727.8k, avg=48071.19, stdev=307029.36 00:35:47.623 clat (usec): min=1146, max=16397, avg=6555.19, stdev=1610.52 00:35:47.623 lat (usec): min=1157, max=16401, avg=6603.26, stdev=1618.25 00:35:47.623 clat percentiles (usec): 00:35:47.623 | 1.00th=[ 2966], 5.00th=[ 3982], 10.00th=[ 4359], 20.00th=[ 5342], 00:35:47.623 | 30.00th=[ 5800], 40.00th=[ 6456], 50.00th=[ 6718], 60.00th=[ 6915], 00:35:47.623 | 70.00th=[ 7111], 80.00th=[ 7504], 90.00th=[ 8225], 95.00th=[ 9503], 00:35:47.623 | 99.00th=[11338], 99.50th=[11600], 99.90th=[12911], 99.95th=[14353], 00:35:47.623 | 99.99th=[16450] 00:35:47.623 bw ( KiB/s): min=35704, max=36864, per=33.89%, avg=36284.00, stdev=820.24, samples=2 00:35:47.623 iops : min= 8926, max= 9216, avg=9071.00, stdev=205.06, samples=2 00:35:47.623 lat (msec) : 2=0.09%, 4=3.30%, 10=88.19%, 20=8.42% 00:35:47.623 cpu : usr=4.47%, sys=8.35%, ctx=789, majf=0, minf=2 00:35:47.623 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:35:47.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:47.623 issued rwts: total=8704,9198,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.623 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:47.623 job1: (groupid=0, jobs=1): err= 0: pid=2246409: Thu Nov 28 08:34:44 2024 00:35:47.623 read: IOPS=4060, BW=15.9MiB/s (16.6MB/s)(16.1MiB/1012msec) 00:35:47.623 slat (nsec): min=995, max=23151k, avg=102545.66, stdev=957514.39 00:35:47.623 clat (usec): min=4099, max=42123, avg=13888.17, stdev=6505.20 00:35:47.623 lat (usec): min=4103, max=42423, avg=13990.72, stdev=6569.52 00:35:47.623 clat percentiles (usec): 00:35:47.623 | 1.00th=[ 5080], 5.00th=[ 6849], 10.00th=[ 7898], 20.00th=[ 8291], 00:35:47.623 | 30.00th=[ 9241], 40.00th=[10421], 50.00th=[13173], 60.00th=[14484], 00:35:47.623 | 70.00th=[15664], 80.00th=[18220], 90.00th=[23987], 95.00th=[27395], 00:35:47.623 | 99.00th=[37487], 99.50th=[37487], 99.90th=[42206], 99.95th=[42206], 00:35:47.623 | 99.99th=[42206] 00:35:47.623 write: IOPS=4553, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1012msec); 0 zone resets 
00:35:47.623 slat (nsec): min=1693, max=12675k, avg=115536.60, stdev=661843.23 00:35:47.623 clat (usec): min=694, max=75896, avg=15462.95, stdev=13183.92 00:35:47.623 lat (usec): min=703, max=75906, avg=15578.49, stdev=13253.51 00:35:47.623 clat percentiles (usec): 00:35:47.623 | 1.00th=[ 1106], 5.00th=[ 3982], 10.00th=[ 5669], 20.00th=[ 7898], 00:35:47.623 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[11994], 60.00th=[14484], 00:35:47.623 | 70.00th=[15008], 80.00th=[17957], 90.00th=[30278], 95.00th=[44827], 00:35:47.623 | 99.00th=[69731], 99.50th=[73925], 99.90th=[76022], 99.95th=[76022], 00:35:47.623 | 99.99th=[76022] 00:35:47.623 bw ( KiB/s): min=16752, max=19200, per=16.79%, avg=17976.00, stdev=1731.00, samples=2 00:35:47.623 iops : min= 4188, max= 4800, avg=4494.00, stdev=432.75, samples=2 00:35:47.623 lat (usec) : 750=0.01%, 1000=0.20% 00:35:47.623 lat (msec) : 2=1.24%, 4=1.20%, 10=38.22%, 20=42.82%, 50=14.03% 00:35:47.623 lat (msec) : 100=2.27% 00:35:47.623 cpu : usr=2.67%, sys=5.14%, ctx=462, majf=0, minf=1 00:35:47.623 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:35:47.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:47.623 issued rwts: total=4109,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.623 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:47.623 job2: (groupid=0, jobs=1): err= 0: pid=2246438: Thu Nov 28 08:34:44 2024 00:35:47.623 read: IOPS=7462, BW=29.2MiB/s (30.6MB/s)(29.3MiB/1005msec) 00:35:47.623 slat (nsec): min=1010, max=9214.4k, avg=68549.65, stdev=553592.38 00:35:47.623 clat (usec): min=1405, max=19176, avg=8897.14, stdev=2305.99 00:35:47.623 lat (usec): min=3451, max=19201, avg=8965.69, stdev=2340.19 00:35:47.623 clat percentiles (usec): 00:35:47.623 | 1.00th=[ 4359], 5.00th=[ 6194], 10.00th=[ 6783], 20.00th=[ 7111], 00:35:47.623 | 30.00th=[ 7570], 40.00th=[ 7898], 50.00th=[ 8160], 60.00th=[ 8586], 00:35:47.623 | 70.00th=[ 9634], 80.00th=[10552], 90.00th=[12780], 95.00th=[13304], 00:35:47.623 | 99.00th=[15270], 99.50th=[16057], 99.90th=[18220], 99.95th=[18220], 00:35:47.623 | 99.99th=[19268] 00:35:47.623 write: IOPS=7641, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1005msec); 0 zone resets 00:35:47.623 slat (nsec): min=1578, max=11192k, avg=58547.12, stdev=432091.33 00:35:47.623 clat (usec): min=1464, max=20273, avg=7866.86, stdev=2374.60 00:35:47.623 lat (usec): min=1686, max=20306, avg=7925.41, stdev=2385.47 00:35:47.623 clat percentiles (usec): 00:35:47.623 | 1.00th=[ 3359], 5.00th=[ 4686], 10.00th=[ 5014], 20.00th=[ 6063], 00:35:47.623 | 30.00th=[ 6849], 40.00th=[ 7439], 50.00th=[ 7963], 60.00th=[ 8094], 00:35:47.623 | 70.00th=[ 8356], 80.00th=[ 8979], 90.00th=[10421], 95.00th=[11863], 00:35:47.623 | 99.00th=[15139], 99.50th=[20055], 99.90th=[20055], 99.95th=[20055], 00:35:47.623 | 99.99th=[20317] 00:35:47.623 bw ( KiB/s): min=30440, max=31000, per=28.70%, avg=30720.00, stdev=395.98, samples=2 00:35:47.623 iops : min= 7610, max= 7750, avg=7680.00, stdev=98.99, samples=2 00:35:47.623 lat (msec) : 2=0.09%, 4=1.27%, 10=79.94%, 20=18.61%, 50=0.09% 00:35:47.623 cpu : usr=5.68%, sys=7.17%, ctx=480, majf=0, minf=2 00:35:47.623 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:35:47.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:47.623 issued rwts: total=7500,7680,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:35:47.623 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:47.623 job3: (groupid=0, jobs=1): err= 0: pid=2246450: Thu Nov 28 08:34:44 2024 00:35:47.623 read: IOPS=5059, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1012msec) 00:35:47.623 slat (nsec): min=954, max=16190k, avg=91642.00, stdev=708466.69 00:35:47.623 clat (usec): min=1224, max=38617, avg=12039.44, stdev=5566.77 00:35:47.623 lat (usec): min=1235, max=38628, avg=12131.08, stdev=5611.13 00:35:47.623 clat percentiles (usec): 00:35:47.623 | 1.00th=[ 2507], 5.00th=[ 5604], 10.00th=[ 7111], 20.00th=[ 8455], 00:35:47.623 | 30.00th=[ 9241], 40.00th=[10028], 50.00th=[10814], 60.00th=[11731], 00:35:47.624 | 70.00th=[12649], 80.00th=[14746], 90.00th=[19006], 95.00th=[21890], 00:35:47.624 | 99.00th=[34866], 99.50th=[35914], 99.90th=[38011], 99.95th=[38536], 00:35:47.624 | 99.99th=[38536] 00:35:47.624 write: IOPS=5531, BW=21.6MiB/s (22.7MB/s)(21.9MiB/1012msec); 0 zone resets 00:35:47.624 slat (nsec): min=1647, max=13962k, avg=86811.66, stdev=589753.55 00:35:47.624 clat (usec): min=1302, max=55187, avg=11915.86, stdev=6589.58 00:35:47.624 lat (usec): min=1313, max=55218, avg=12002.67, stdev=6632.34 00:35:47.624 clat percentiles (usec): 00:35:47.624 | 1.00th=[ 3654], 5.00th=[ 5932], 10.00th=[ 7177], 20.00th=[ 8225], 00:35:47.624 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[10290], 60.00th=[11207], 00:35:47.624 | 70.00th=[13304], 80.00th=[14877], 90.00th=[15664], 95.00th=[20317], 00:35:47.624 | 99.00th=[48497], 99.50th=[51643], 99.90th=[53740], 99.95th=[53740], 00:35:47.624 | 99.99th=[55313] 00:35:47.624 bw ( KiB/s): min=20480, max=23288, per=20.44%, avg=21884.00, stdev=1985.56, samples=2 00:35:47.624 iops : min= 5120, max= 5822, avg=5471.00, stdev=496.39, samples=2 00:35:47.624 lat (msec) : 2=0.35%, 4=1.73%, 10=41.70%, 20=50.51%, 50=5.21% 00:35:47.624 lat (msec) : 100=0.51% 00:35:47.624 cpu : usr=3.07%, sys=5.74%, ctx=506, majf=0, minf=1 00:35:47.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:35:47.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:47.624 issued rwts: total=5120,5598,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:47.624 00:35:47.624 Run status group 0 (all jobs): 00:35:47.624 READ: bw=98.2MiB/s (103MB/s), 15.9MiB/s-33.8MiB/s (16.6MB/s-35.4MB/s), io=99.3MiB (104MB), run=1005-1012msec 00:35:47.624 WRITE: bw=105MiB/s (110MB/s), 17.8MiB/s-35.7MiB/s (18.7MB/s-37.4MB/s), io=106MiB (111MB), run=1005-1012msec 00:35:47.624 00:35:47.624 Disk stats (read/write): 00:35:47.624 nvme0n1: ios=6706/7167, merge=0/0, ticks=48969/44861, in_queue=93830, util=82.16% 00:35:47.624 nvme0n2: ios=3106/3567, merge=0/0, ticks=40555/55182, in_queue=95737, util=96.79% 00:35:47.624 nvme0n3: ios=5632/6085, merge=0/0, ticks=47546/45950, in_queue=93496, util=86.50% 00:35:47.624 nvme0n4: ios=4081/4096, merge=0/0, ticks=41273/43926, in_queue=85199, util=88.76% 00:35:47.624 08:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:35:47.624 08:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2246645 00:35:47.624 08:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:35:47.624 08:34:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:35:47.624 [global] 00:35:47.624 thread=1 00:35:47.624 invalidate=1 00:35:47.624 rw=read 00:35:47.624 time_based=1 00:35:47.624 runtime=10 00:35:47.624 ioengine=libaio 00:35:47.624 direct=1 00:35:47.624 bs=4096 00:35:47.624 iodepth=1 00:35:47.624 norandommap=1 00:35:47.624 numjobs=1 00:35:47.624 00:35:47.624 [job0] 00:35:47.624 filename=/dev/nvme0n1 00:35:47.624 [job1] 00:35:47.624 filename=/dev/nvme0n2 00:35:47.624 [job2] 00:35:47.624 filename=/dev/nvme0n3 00:35:47.624 [job3] 00:35:47.624 filename=/dev/nvme0n4 00:35:47.624 Could not set queue depth (nvme0n1) 00:35:47.624 Could not set queue depth (nvme0n2) 00:35:47.624 Could not set queue depth (nvme0n3) 00:35:47.624 Could not set queue depth (nvme0n4) 00:35:47.895 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:47.895 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:47.895 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:47.895 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:47.895 fio-3.35 00:35:47.895 Starting 4 threads 00:35:50.438 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:35:50.699 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=8839168, buflen=4096 00:35:50.699 fio: pid=2246930, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:50.699 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:35:50.699 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:50.699 08:34:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:35:50.699 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=4665344, buflen=4096 00:35:50.699 fio: pid=2246924, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:50.961 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:50.961 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:35:50.961 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=4710400, buflen=4096 00:35:50.961 fio: pid=2246891, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:51.222 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:51.222 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:35:51.222 fio: io_u error on file /dev/nvme0n2: Operation not supported: read 
offset=311296, buflen=4096 00:35:51.222 fio: pid=2246907, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:51.222 00:35:51.222 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2246891: Thu Nov 28 08:34:48 2024 00:35:51.222 read: IOPS=384, BW=1536KiB/s (1573kB/s)(4600KiB/2995msec) 00:35:51.222 slat (usec): min=6, max=12738, avg=34.42, stdev=374.89 00:35:51.222 clat (usec): min=349, max=41806, avg=2545.80, stdev=8372.39 00:35:51.222 lat (usec): min=357, max=54017, avg=2580.23, stdev=8432.80 00:35:51.222 clat percentiles (usec): 00:35:51.222 | 1.00th=[ 502], 5.00th=[ 562], 10.00th=[ 611], 20.00th=[ 660], 00:35:51.222 | 30.00th=[ 685], 40.00th=[ 717], 50.00th=[ 742], 60.00th=[ 758], 00:35:51.222 | 70.00th=[ 775], 80.00th=[ 799], 90.00th=[ 848], 95.00th=[ 938], 00:35:51.222 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:35:51.222 | 99.99th=[41681] 00:35:51.222 bw ( KiB/s): min= 96, max= 5352, per=31.80%, avg=1819.20, stdev=2452.96, samples=5 00:35:51.222 iops : min= 24, max= 1338, avg=454.80, stdev=613.24, samples=5 00:35:51.223 lat (usec) : 500=0.96%, 750=55.52%, 1000=38.84% 00:35:51.223 lat (msec) : 4=0.09%, 50=4.52% 00:35:51.223 cpu : usr=0.43%, sys=1.00%, ctx=1154, majf=0, minf=2 00:35:51.223 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:51.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.223 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.223 issued rwts: total=1151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.223 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:51.223 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2246907: Thu Nov 28 08:34:48 2024 00:35:51.223 read: IOPS=24, BW=96.1KiB/s (98.4kB/s)(304KiB/3163msec) 00:35:51.223 slat (usec): min=20, max=8682, avg=213.03, stdev=1170.52 00:35:51.223 clat (usec): min=931, max=42126, avg=41115.56, stdev=4693.04 00:35:51.223 lat (usec): min=971, max=50026, avg=41331.03, stdev=4845.53 00:35:51.223 clat percentiles (usec): 00:35:51.223 | 1.00th=[ 930], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:35:51.223 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:35:51.223 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:51.223 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:51.223 | 99.99th=[42206] 00:35:51.223 bw ( KiB/s): min= 96, max= 96, per=1.68%, avg=96.00, stdev= 0.00, samples=6 00:35:51.223 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=6 00:35:51.223 lat (usec) : 1000=1.30% 00:35:51.223 lat (msec) : 50=97.40% 00:35:51.223 cpu : usr=0.16%, sys=0.00%, ctx=80, majf=0, minf=2 00:35:51.223 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:51.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.223 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.223 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.223 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:51.223 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2246924: Thu Nov 28 08:34:48 2024 00:35:51.223 read: IOPS=407, BW=1628KiB/s (1667kB/s)(4556KiB/2799msec) 00:35:51.223 slat (usec): min=6, max=3704, avg=27.15, stdev=109.23 00:35:51.223 
clat (usec): min=276, max=42023, avg=2404.95, stdev=8033.05 00:35:51.223 lat (usec): min=284, max=45002, avg=2432.10, stdev=8049.74 00:35:51.223 clat percentiles (usec): 00:35:51.223 | 1.00th=[ 465], 5.00th=[ 545], 10.00th=[ 586], 20.00th=[ 644], 00:35:51.223 | 30.00th=[ 685], 40.00th=[ 717], 50.00th=[ 758], 60.00th=[ 799], 00:35:51.223 | 70.00th=[ 824], 80.00th=[ 848], 90.00th=[ 898], 95.00th=[ 963], 00:35:51.223 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:35:51.223 | 99.99th=[42206] 00:35:51.223 bw ( KiB/s): min= 248, max= 2544, per=30.12%, avg=1723.20, stdev=917.78, samples=5 00:35:51.223 iops : min= 62, max= 636, avg=430.80, stdev=229.45, samples=5 00:35:51.223 lat (usec) : 500=2.19%, 750=46.75%, 1000=46.32% 00:35:51.223 lat (msec) : 2=0.53%, 50=4.12% 00:35:51.223 cpu : usr=0.50%, sys=1.04%, ctx=1141, majf=0, minf=1 00:35:51.223 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:51.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.223 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.223 issued rwts: total=1140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.223 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:51.223 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2246930: Thu Nov 28 08:34:48 2024 00:35:51.223 read: IOPS=833, BW=3332KiB/s (3411kB/s)(8632KiB/2591msec) 00:35:51.223 slat (nsec): min=6956, max=60898, avg=24880.71, stdev=4767.55 00:35:51.223 clat (usec): min=402, max=41956, avg=1165.03, stdev=3980.27 00:35:51.223 lat (usec): min=427, max=41982, avg=1189.91, stdev=3980.32 00:35:51.223 clat percentiles (usec): 00:35:51.223 | 1.00th=[ 506], 5.00th=[ 545], 10.00th=[ 562], 20.00th=[ 635], 00:35:51.223 | 30.00th=[ 693], 40.00th=[ 758], 50.00th=[ 791], 60.00th=[ 848], 00:35:51.223 | 70.00th=[ 881], 80.00th=[ 898], 90.00th=[ 930], 95.00th=[ 955], 00:35:51.223 | 99.00th=[ 1090], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:35:51.223 | 99.99th=[42206] 00:35:51.223 bw ( KiB/s): min= 2080, max= 4880, per=59.02%, avg=3376.00, stdev=1030.31, samples=5 00:35:51.223 iops : min= 520, max= 1220, avg=844.00, stdev=257.58, samples=5 00:35:51.223 lat (usec) : 500=0.97%, 750=37.56%, 1000=59.61% 00:35:51.223 lat (msec) : 2=0.83%, 50=0.97% 00:35:51.223 cpu : usr=1.08%, sys=2.20%, ctx=2159, majf=0, minf=2 00:35:51.223 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:51.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.223 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.223 issued rwts: total=2159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.223 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:51.223 00:35:51.223 Run status group 0 (all jobs): 00:35:51.223 READ: bw=5720KiB/s (5857kB/s), 96.1KiB/s-3332KiB/s (98.4kB/s-3411kB/s), io=17.7MiB (18.5MB), run=2591-3163msec 00:35:51.223 00:35:51.223 Disk stats (read/write): 00:35:51.223 nvme0n1: ios=1146/0, merge=0/0, ticks=2755/0, in_queue=2755, util=94.36% 00:35:51.223 nvme0n2: ios=74/0, merge=0/0, ticks=3045/0, in_queue=3045, util=95.26% 00:35:51.223 nvme0n3: ios=1134/0, merge=0/0, ticks=2506/0, in_queue=2506, util=96.03% 00:35:51.223 nvme0n4: ios=2158/0, merge=0/0, ticks=2454/0, in_queue=2454, util=96.42% 00:35:51.223 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:35:51.223 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:35:51.484 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:51.484 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:35:51.745 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:51.745 08:34:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:35:52.005 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:52.005 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:35:52.005 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:35:52.005 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2246645 00:35:52.005 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:35:52.005 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:52.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:52.265 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:52.265 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:35:52.265 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:52.265 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:52.265 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:52.265 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:52.265 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:35:52.265 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:35:52.265 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:35:52.265 nvmf hotplug test: fio failed as expected 00:35:52.265 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:52.265 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:35:52.265 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:35:52.265 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:35:52.525 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:35:52.525 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:35:52.525 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:52.525 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:35:52.525 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:52.525 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:35:52.525 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:52.525 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:52.525 rmmod nvme_tcp 00:35:52.525 rmmod nvme_fabrics 00:35:52.525 rmmod nvme_keyring 00:35:52.525 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:52.525 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:35:52.525 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:35:52.525 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2243474 ']' 00:35:52.525 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2243474 00:35:52.525 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2243474 ']' 00:35:52.525 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2243474 00:35:52.525 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:35:52.525 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:52.525 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2243474 00:35:52.525 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:52.525 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:52.525 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2243474' 00:35:52.525 killing process with pid 2243474 00:35:52.525 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2243474 00:35:52.525 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2243474 00:35:52.786 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
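The teardown traced above (nvmftestfini) boils down to a short shell sequence: unload the kernel NVMe/TCP initiator modules, stop the nvmf_tgt reactor process, and strip only the SPDK-tagged iptables rules. A minimal stand-alone sketch of that cleanup, assuming $NVMF_PID holds the target PID the harness tracks (2243474 in this run):

    #!/usr/bin/env bash
    # Sketch of the nvmftestfini cleanup path seen in the trace above.
    sync
    modprobe -v -r nvme-tcp        # also drops nvme_fabrics / nvme_keyring deps
    modprobe -v -r nvme-fabrics
    kill "$NVMF_PID"               # $NVMF_PID is an assumption; the harness
    wait "$NVMF_PID" 2>/dev/null   # kills and waits on pid 2243474 directly
    # Keep every firewall rule except the ones SPDK added for the test.
    iptables-save | grep -v SPDK_NVMF | iptables-restore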
00:35:52.786 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:52.786 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:52.786 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:35:52.786 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:35:52.786 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:52.786 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:52.786 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:52.786 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:52.786 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:52.786 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:52.786 08:34:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:54.698 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:54.698 00:35:54.698 real 0m28.451s 00:35:54.698 user 2m15.362s 00:35:54.698 sys 0m12.215s 00:35:54.698 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:54.698 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:54.698 ************************************ 00:35:54.698 END TEST nvmf_fio_target 00:35:54.698 ************************************ 00:35:54.698 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:54.698 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:54.698 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:54.698 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:54.698 ************************************ 00:35:54.698 START TEST nvmf_bdevio 00:35:54.698 ************************************ 00:35:54.698 08:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:35:54.960 * Looking for test storage... 
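The bdevio test that starts here is driven by run_test, which simply wraps the target script in xtrace markers; reproducing the invocation by hand amounts to the command taken verbatim from the trace:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode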
00:35:54.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:54.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.960 --rc genhtml_branch_coverage=1 00:35:54.960 --rc genhtml_function_coverage=1 00:35:54.960 --rc genhtml_legend=1 00:35:54.960 --rc geninfo_all_blocks=1 00:35:54.960 --rc geninfo_unexecuted_blocks=1 00:35:54.960 00:35:54.960 ' 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:54.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.960 --rc genhtml_branch_coverage=1 00:35:54.960 --rc genhtml_function_coverage=1 00:35:54.960 --rc genhtml_legend=1 00:35:54.960 --rc geninfo_all_blocks=1 00:35:54.960 --rc geninfo_unexecuted_blocks=1 00:35:54.960 00:35:54.960 ' 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:54.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.960 --rc genhtml_branch_coverage=1 00:35:54.960 --rc genhtml_function_coverage=1 00:35:54.960 --rc genhtml_legend=1 00:35:54.960 --rc geninfo_all_blocks=1 00:35:54.960 --rc geninfo_unexecuted_blocks=1 00:35:54.960 00:35:54.960 ' 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:54.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:54.960 --rc genhtml_branch_coverage=1 00:35:54.960 --rc genhtml_function_coverage=1 00:35:54.960 --rc genhtml_legend=1 00:35:54.960 --rc geninfo_all_blocks=1 00:35:54.960 --rc geninfo_unexecuted_blocks=1 00:35:54.960 00:35:54.960 ' 00:35:54.960 08:34:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:54.960 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:54.961 08:34:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:35:54.961 08:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:03.108 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:03.108 08:34:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:03.108 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:03.108 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:03.109 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:03.109 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:03.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:03.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:36:03.109 00:36:03.109 --- 10.0.0.2 ping statistics --- 00:36:03.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.109 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:03.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:03.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:36:03.109 00:36:03.109 --- 10.0.0.1 ping statistics --- 00:36:03.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.109 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:03.109 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:03.110 08:34:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:03.110 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2251911 00:36:03.110 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2251911 00:36:03.110 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:36:03.110 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2251911 ']' 00:36:03.110 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:03.110 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:03.110 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:03.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:03.110 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:03.110 08:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:03.110 [2024-11-28 08:34:59.650233] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:03.110 [2024-11-28 08:34:59.651203] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:36:03.110 [2024-11-28 08:34:59.651241] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:03.110 [2024-11-28 08:34:59.747438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:03.110 [2024-11-28 08:34:59.784054] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:03.110 [2024-11-28 08:34:59.784088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:03.110 [2024-11-28 08:34:59.784096] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:03.110 [2024-11-28 08:34:59.784103] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:03.110 [2024-11-28 08:34:59.784109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:03.110 [2024-11-28 08:34:59.785651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:03.110 [2024-11-28 08:34:59.785802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:03.110 [2024-11-28 08:34:59.785996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:03.110 [2024-11-28 08:34:59.785997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:03.110 [2024-11-28 08:34:59.842576] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
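The trace above is nvmf_tcp_init followed by nvmfappstart: one port of the E810 pair (cvl_0_0) is moved into a private network namespace, both sides get a 10.0.0.x/24 address, a tagged iptables rule opens port 4420, and nvmf_tgt is launched inside the namespace in interrupt mode. A condensed sketch of those steps, with names, addresses, and flags copied from the trace (the real helpers live in nvmf/common.sh and do more bookkeeping):

  #!/usr/bin/env bash
  # Sketch of nvmf_tcp_init + nvmfappstart as traced above. Assumes two
  # ports of one NIC exposed as cvl_0_0/cvl_0_1 (names from the log).
  NS=cvl_0_0_ns_spdk

  # Put the target port in its own namespace so initiator and target
  # traffic really crosses the wire between the two ports.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"

  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up

  # Tagged ACCEPT rule so teardown can find and strip it later.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  ping -c 1 10.0.0.2                       # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator

  # Start the target in interrupt mode inside the namespace, then poll
  # for the RPC socket -- roughly what waitforlisten does.
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
      [[ -S /var/tmp/spdk.sock ]] && break
      sleep 0.1
  done
  echo "nvmf_tgt up as pid $nvmfpid"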
00:36:03.110 [2024-11-28 08:34:59.843873] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:03.110 [2024-11-28 08:34:59.843992] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:03.110 [2024-11-28 08:34:59.844653] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:03.110 [2024-11-28 08:34:59.844710] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:03.423 [2024-11-28 08:35:00.478752] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:03.423 Malloc0 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.423 08:35:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:03.423 [2024-11-28 08:35:00.571015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:03.423 { 00:36:03.423 "params": { 00:36:03.423 "name": "Nvme$subsystem", 00:36:03.423 "trtype": "$TEST_TRANSPORT", 00:36:03.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.423 "adrfam": "ipv4", 00:36:03.423 "trsvcid": "$NVMF_PORT", 00:36:03.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.423 "hdgst": ${hdgst:-false}, 00:36:03.423 "ddgst": ${ddgst:-false} 00:36:03.423 }, 00:36:03.423 "method": "bdev_nvme_attach_controller" 00:36:03.423 } 00:36:03.423 EOF 00:36:03.423 )") 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:36:03.423 08:35:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:03.423 "params": { 00:36:03.423 "name": "Nvme1", 00:36:03.423 "trtype": "tcp", 00:36:03.423 "traddr": "10.0.0.2", 00:36:03.423 "adrfam": "ipv4", 00:36:03.423 "trsvcid": "4420", 00:36:03.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:03.423 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:03.423 "hdgst": false, 00:36:03.423 "ddgst": false 00:36:03.423 }, 00:36:03.423 "method": "bdev_nvme_attach_controller" 00:36:03.423 }' 00:36:03.423 [2024-11-28 08:35:00.624300] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
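Target-side provisioning in bdevio.sh (@18 through @22 above) is four RPCs plus the transport. The same sequence issued directly with scripts/rpc.py, arguments copied verbatim from the trace (rpc_cmd in the test is a thin wrapper around exactly these calls):

  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

  $RPC nvmf_create_transport -t tcp -o -u 8192    # transport opts as traced ('-t tcp -o', io unit 8192)
  $RPC bdev_malloc_create 64 512 -b Malloc0       # 64 MiB backing bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420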
00:36:03.423 [2024-11-28 08:35:00.624355] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2252211 ] 00:36:03.737 [2024-11-28 08:35:00.712855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:03.737 [2024-11-28 08:35:00.752465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:03.737 [2024-11-28 08:35:00.752615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:03.737 [2024-11-28 08:35:00.752615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:03.737 I/O targets: 00:36:03.737 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:36:03.737 00:36:03.737 00:36:03.737 CUnit - A unit testing framework for C - Version 2.1-3 00:36:03.737 http://cunit.sourceforge.net/ 00:36:03.737 00:36:03.737 00:36:03.737 Suite: bdevio tests on: Nvme1n1 00:36:03.737 Test: blockdev write read block ...passed 00:36:03.737 Test: blockdev write zeroes read block ...passed 00:36:03.737 Test: blockdev write zeroes read no split ...passed 00:36:03.737 Test: blockdev write zeroes read split ...passed 00:36:04.002 Test: blockdev write zeroes read split partial ...passed 00:36:04.002 Test: blockdev reset ...[2024-11-28 08:35:01.024040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:36:04.002 [2024-11-28 08:35:01.024142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b5970 (9): Bad file descriptor 00:36:04.002 [2024-11-28 08:35:01.073316] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
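The printf traced above is gen_nvmf_target_json emitting the one controller-attach fragment, which bdevio receives over --json /dev/fd/62. Written out as a standalone initiator config (the surrounding subsystems/bdev wrapper is an assumption here; only the inner object appears verbatim in the trace):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }

With that config loaded, bdevio exercises the resulting namespace bdev, which is why the CUnit banner above reads "Suite: bdevio tests on: Nvme1n1".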
00:36:04.002 passed 00:36:04.002 Test: blockdev write read 8 blocks ...passed 00:36:04.002 Test: blockdev write read size > 128k ...passed 00:36:04.002 Test: blockdev write read invalid size ...passed 00:36:04.002 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:36:04.002 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:36:04.002 Test: blockdev write read max offset ...passed 00:36:04.002 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:36:04.002 Test: blockdev writev readv 8 blocks ...passed 00:36:04.002 Test: blockdev writev readv 30 x 1block ...passed 00:36:04.264 Test: blockdev writev readv block ...passed 00:36:04.264 Test: blockdev writev readv size > 128k ...passed 00:36:04.264 Test: blockdev writev readv size > 128k in two iovs ...passed 00:36:04.264 Test: blockdev comparev and writev ...[2024-11-28 08:35:01.298934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:04.264 [2024-11-28 08:35:01.298983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:04.264 [2024-11-28 08:35:01.299000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:04.264 [2024-11-28 08:35:01.299009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:04.264 [2024-11-28 08:35:01.299636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:04.264 [2024-11-28 08:35:01.299649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:36:04.264 [2024-11-28 08:35:01.299663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:04.264 [2024-11-28 08:35:01.299672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:36:04.264 [2024-11-28 08:35:01.300321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:04.264 [2024-11-28 08:35:01.300333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:36:04.264 [2024-11-28 08:35:01.300348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:04.264 [2024-11-28 08:35:01.300364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:04.264 [2024-11-28 08:35:01.301001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:04.264 [2024-11-28 08:35:01.301012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:36:04.264 [2024-11-28 08:35:01.301026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:36:04.264 [2024-11-28 08:35:01.301034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:36:04.264 passed 00:36:04.264 Test: blockdev nvme passthru rw ...passed 00:36:04.264 Test: blockdev nvme passthru vendor specific ...[2024-11-28 08:35:01.386109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:04.264 [2024-11-28 08:35:01.386126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:04.264 [2024-11-28 08:35:01.386519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:04.264 [2024-11-28 08:35:01.386532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:04.264 [2024-11-28 08:35:01.386948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:04.264 [2024-11-28 08:35:01.386959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:04.264 [2024-11-28 08:35:01.387350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:36:04.264 [2024-11-28 08:35:01.387361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:36:04.264 passed 00:36:04.264 Test: blockdev nvme admin passthru ...passed 00:36:04.264 Test: blockdev copy ...passed 00:36:04.264 00:36:04.264 Run Summary: Type Total Ran Passed Failed Inactive 00:36:04.264 suites 1 1 n/a 0 0 00:36:04.264 tests 23 23 23 0 0 00:36:04.264 asserts 152 152 152 0 n/a 00:36:04.264 00:36:04.264 Elapsed time = 1.116 seconds 00:36:04.525 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:04.525 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.525 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:04.525 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.525 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:36:04.526 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:36:04.526 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:04.526 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:36:04.526 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:04.526 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:36:04.526 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:04.526 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:04.526 rmmod nvme_tcp 00:36:04.526 rmmod nvme_fabrics 00:36:04.526 rmmod nvme_keyring 00:36:04.526 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
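The teardown running here, nvmftestfini, unloads the kernel modules, kills the target, and strips the firewall rule. A sketch of the two helpers doing the interesting work, reconstructed from the trace just above and below (remove_spdk_ns is assumed to boil down to deleting the namespace):

  killprocess() {
      local pid=$1 name
      kill -0 "$pid" || return 1                     # already gone?
      name=$(ps --no-headers -o comm= "$pid")        # 'reactor_3' in the trace
      [[ $name == sudo ]] && return 1                # never kill a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2> /dev/null || true               # works when we spawned it
  }

  iptr() {
      # Restore the ruleset minus every rule tagged at setup time.
      iptables-save | grep -v SPDK_NVMF | iptables-restore
  }

  killprocess "$nvmfpid"
  iptr
  ip netns delete cvl_0_0_ns_spdk   # hand the target port back to the host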
00:36:04.526 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:36:04.526 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:36:04.526 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2251911 ']' 00:36:04.526 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2251911 00:36:04.526 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2251911 ']' 00:36:04.526 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2251911 00:36:04.526 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:36:04.526 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:04.526 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2251911 00:36:04.526 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:36:04.526 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:36:04.526 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2251911' 00:36:04.526 killing process with pid 2251911 00:36:04.526 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2251911 00:36:04.526 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2251911 00:36:04.787 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:04.787 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:04.787 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:04.787 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:36:04.787 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:36:04.787 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:04.787 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:36:04.787 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:04.787 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:04.787 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:04.787 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:04.787 08:35:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:06.702 08:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:06.963 00:36:06.963 real 0m12.015s 00:36:06.963 user 
0m9.074s 00:36:06.963 sys 0m6.201s 00:36:06.963 08:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:06.963 08:35:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:06.963 ************************************ 00:36:06.963 END TEST nvmf_bdevio 00:36:06.964 ************************************ 00:36:06.964 08:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:36:06.964 00:36:06.964 real 5m1.059s 00:36:06.964 user 10m14.424s 00:36:06.964 sys 2m5.562s 00:36:06.964 08:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:06.964 08:35:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:06.964 ************************************ 00:36:06.964 END TEST nvmf_target_core_interrupt_mode 00:36:06.964 ************************************ 00:36:06.964 08:35:04 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:36:06.964 08:35:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:06.964 08:35:04 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:06.964 08:35:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:06.964 ************************************ 00:36:06.964 START TEST nvmf_interrupt 00:36:06.964 ************************************ 00:36:06.964 08:35:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:36:06.964 * Looking for test storage... 
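Just below, sourcing interrupt.sh pulls in scripts/common.sh, whose lt/cmp_versions gate decides whether the installed lcov (1.15 here) predates version 2 and therefore needs the legacy --rc options. A reconstruction of that comparison from the traced steps (numeric components assumed; the real helper also validates each field with decimal):

  cmp_versions() {
      local ver1 ver2 op=$2 v lt=0 gt=0
      IFS=.-: read -ra ver1 <<< "$1"      # split on '.', '-' and ':'
      IFS=.-: read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields count as 0
          ((d1 > d2)) && { gt=1; break; }
          ((d1 < d2)) && { lt=1; break; }
      done
      case $op in
          '<') ((lt == 1)) ;;
          '>') ((gt == 1)) ;;
          *) ((lt == 0 && gt == 0)) ;;
      esac
  }

  cmp_versions 1.15 '<' 2 && echo "lcov is pre-2.x"   # matches the traced 'lt 1.15 2'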
00:36:06.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:06.964 08:35:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:06.964 08:35:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:36:06.964 08:35:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:07.226 08:35:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:07.226 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:07.226 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:07.226 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:07.226 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:36:07.226 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:36:07.226 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:36:07.226 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:36:07.226 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:36:07.226 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:36:07.226 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:36:07.226 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:07.226 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:36:07.226 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:36:07.226 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:07.226 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:07.226 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:36:07.226 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:36:07.226 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:07.226 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:07.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.227 --rc genhtml_branch_coverage=1 00:36:07.227 --rc genhtml_function_coverage=1 00:36:07.227 --rc genhtml_legend=1 00:36:07.227 --rc geninfo_all_blocks=1 00:36:07.227 --rc geninfo_unexecuted_blocks=1 00:36:07.227 00:36:07.227 ' 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:07.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.227 --rc genhtml_branch_coverage=1 00:36:07.227 --rc genhtml_function_coverage=1 00:36:07.227 --rc genhtml_legend=1 00:36:07.227 --rc geninfo_all_blocks=1 00:36:07.227 --rc geninfo_unexecuted_blocks=1 00:36:07.227 00:36:07.227 ' 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:07.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.227 --rc genhtml_branch_coverage=1 00:36:07.227 --rc genhtml_function_coverage=1 00:36:07.227 --rc genhtml_legend=1 00:36:07.227 --rc geninfo_all_blocks=1 00:36:07.227 --rc geninfo_unexecuted_blocks=1 00:36:07.227 00:36:07.227 ' 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:07.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:07.227 --rc genhtml_branch_coverage=1 00:36:07.227 --rc genhtml_function_coverage=1 00:36:07.227 --rc genhtml_legend=1 00:36:07.227 --rc geninfo_all_blocks=1 00:36:07.227 --rc geninfo_unexecuted_blocks=1 00:36:07.227 00:36:07.227 ' 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:36:07.227 08:35:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:15.373 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:15.373 08:35:11 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:15.373 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:15.373 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:15.373 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:15.373 08:35:11 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:15.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:15.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:36:15.373 00:36:15.373 --- 10.0.0.2 ping statistics --- 00:36:15.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.373 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:15.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:15.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:36:15.373 00:36:15.373 --- 10.0.0.1 ping statistics --- 00:36:15.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.373 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2256571 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2256571 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2256571 ']' 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:15.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:15.373 08:35:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:15.373 [2024-11-28 08:35:11.943635] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:15.373 [2024-11-28 08:35:11.944759] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:36:15.373 [2024-11-28 08:35:11.944808] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:15.373 [2024-11-28 08:35:12.042375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:15.373 [2024-11-28 08:35:12.093389] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
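The two "Found net devices under ..." lines above come from nvmf/common.sh walking sysfs: each supported PCI function (E810, 8086:159b here) contributes the netdevs under its device directory, filtered to links that are up. Roughly as follows (the up-state check is an assumption from the traced '[[ up == up ]]'; device addresses are from the trace):

  net_devs=()
  for pci in 0000:4b:00.0 0000:4b:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      for net_dev in "${pci_net_devs[@]}"; do
          if [[ $(< "$net_dev/operstate") == up ]]; then
              echo "Found net devices under $pci: ${net_dev##*/}"
              net_devs+=("${net_dev##*/}")
          fi
      done
  done
  # -> cvl_0_0 and cvl_0_1, which become the target/initiator interfaces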
00:36:15.373 [2024-11-28 08:35:12.093439] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:15.373 [2024-11-28 08:35:12.093448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:15.373 [2024-11-28 08:35:12.093455] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:15.373 [2024-11-28 08:35:12.093462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:15.373 [2024-11-28 08:35:12.095072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:15.373 [2024-11-28 08:35:12.095076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:15.373 [2024-11-28 08:35:12.177088] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:15.373 [2024-11-28 08:35:12.177681] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:15.373 [2024-11-28 08:35:12.177978] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:15.634 08:35:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:15.634 08:35:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:36:15.634 08:35:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:15.634 08:35:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:15.634 08:35:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:15.634 08:35:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:15.634 08:35:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:36:15.634 08:35:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:36:15.634 08:35:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:36:15.634 08:35:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:36:15.634 5000+0 records in 00:36:15.634 5000+0 records out 00:36:15.634 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0184546 s, 555 MB/s 00:36:15.634 08:35:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:36:15.634 08:35:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.634 08:35:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:15.634 AIO0 00:36:15.634 08:35:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.634 08:35:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:36:15.634 08:35:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.634 08:35:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:15.634 [2024-11-28 08:35:12.888244] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:15.634 08:35:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.634 08:35:12 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:15.634 08:35:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.634 08:35:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:15.634 08:35:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.634 08:35:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:36:15.634 08:35:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.634 08:35:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:15.895 08:35:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.895 08:35:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:15.895 08:35:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.895 08:35:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:15.895 [2024-11-28 08:35:12.932657] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:15.895 08:35:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.895 08:35:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:15.895 08:35:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2256571 0 00:36:15.895 08:35:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2256571 0 idle 00:36:15.895 08:35:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2256571 00:36:15.895 08:35:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:15.895 08:35:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:15.895 08:35:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:15.895 08:35:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:15.895 08:35:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:15.895 08:35:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:15.895 08:35:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:15.895 08:35:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:15.895 08:35:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:15.895 08:35:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2256571 -w 256 00:36:15.895 08:35:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:15.895 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2256571 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.31 reactor_0' 00:36:15.896 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2256571 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.31 reactor_0 00:36:15.896 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:15.896 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:15.896 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:15.896 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:36:15.896 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:15.896 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:15.896 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:15.896 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:15.896 08:35:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:15.896 08:35:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2256571 1 00:36:15.896 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2256571 1 idle 00:36:15.896 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2256571 00:36:15.896 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:15.896 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:15.896 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:15.896 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:15.896 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:15.896 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:15.896 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:15.896 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:15.896 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:15.896 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2256571 -w 256 00:36:15.896 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2256576 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2256576 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2256938 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2256571 0 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2256571 0 busy 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2256571 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2256571 -w 256 00:36:16.156 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2256571 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.49 reactor_0' 00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2256571 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.49 reactor_0 00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2256571 1 00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2256571 1 busy 00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2256571 00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 ))
00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2256571 -w 256
00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2256576 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.27 reactor_1'
00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2256576 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.27 reactor_1
00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:36:16.417 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:36:16.679 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:36:16.679 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:36:16.679 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:36:16.679 08:35:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:36:16.679 08:35:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2256938
00:36:26.682 Initializing NVMe Controllers
00:36:26.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:26.682 Controller IO queue size 256, less than required.
00:36:26.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:36:26.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:36:26.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:36:26.682 Initialization complete. Launching workers.
00:36:26.682 ========================================================
00:36:26.682 Latency(us)
00:36:26.682 Device Information : IOPS MiB/s Average min max
00:36:26.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19417.50 75.85 13188.61 4107.72 31991.56
00:36:26.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19793.70 77.32 12934.86 8094.49 28297.02
00:36:26.682 ========================================================
00:36:26.682 Total : 39211.20 153.17 13060.52 4107.72 31991.56
00:36:26.682
00:36:26.682 08:35:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2256571 0
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2256571 0 idle
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2256571
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2256571 -w 256
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2256571 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.30 reactor_0'
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2256571 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.30 reactor_0
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2256571 1
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2256571 1 idle
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2256571
00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt --
interrupt/common.sh@11 -- # local idx=1 00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2256571 -w 256 00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2256576 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:09.99 reactor_1' 00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2256576 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:09.99 reactor_1 00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:26.683 08:35:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:27.256 08:35:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:36:27.256 08:35:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:36:27.256 08:35:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:27.256 08:35:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:36:27.256 08:35:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2256571 0 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2256571 0 idle 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2256571 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2256571 -w 256 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2256571 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.69 reactor_0' 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2256571 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.69 reactor_0 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2256571 1 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2256571 1 idle 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2256571 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
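The interrupt/common.sh trace repeated above and below is the harness's reactor busy/idle probe: take one batch sample from top -bHn 1, grep the reactor thread, read the %CPU column with awk, truncate the fraction, and compare against thresholds (idle passes at or below 30%, busy at or above BUSY_THRESHOLD, which target/interrupt.sh lowers to 30 while perf runs). A condensed sketch of that check, with an illustrative function name and without the ten-attempt retry loop the real reactor_is_busy_or_idle helper uses:

    # Probe one reactor thread's CPU usage and compare it to a threshold.
    reactor_cpu_check() {
        local pid=$1 idx=$2 state=$3
        local busy_threshold=${BUSY_THRESHOLD:-65} idle_threshold=30 cpu_rate
        cpu_rate=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | awk '{print $9}')
        cpu_rate=${cpu_rate%.*}   # "99.9" -> "99", "0.0" -> "0", as common.sh@28 does
        if [[ $state == busy ]]; then
            (( cpu_rate >= busy_threshold ))
        else
            (( cpu_rate <= idle_threshold ))
        fi
    }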
00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2256571 -w 256 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2256576 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1' 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2256576 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:29.805 08:35:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:30.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:30.066 rmmod nvme_tcp 00:36:30.066 rmmod nvme_fabrics 00:36:30.066 rmmod nvme_keyring 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
2256571 ']' 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2256571 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2256571 ']' 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2256571 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2256571 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2256571' 00:36:30.066 killing process with pid 2256571 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2256571 00:36:30.066 08:35:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2256571 00:36:30.327 08:35:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:30.327 08:35:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:30.327 08:35:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:30.327 08:35:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:36:30.327 08:35:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:36:30.327 08:35:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:30.327 08:35:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:36:30.327 08:35:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:30.327 08:35:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:30.327 08:35:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:30.327 08:35:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:30.327 08:35:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:32.875 08:35:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:32.875 00:36:32.875 real 0m25.488s 00:36:32.875 user 0m40.308s 00:36:32.875 sys 0m9.845s 00:36:32.875 08:35:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:32.875 08:35:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:32.875 ************************************ 00:36:32.875 END TEST nvmf_interrupt 00:36:32.875 ************************************ 00:36:32.875 00:36:32.875 real 30m8.665s 00:36:32.875 user 61m28.198s 00:36:32.875 sys 10m21.830s 00:36:32.875 08:35:29 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:32.875 08:35:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:32.875 ************************************ 00:36:32.875 END TEST nvmf_tcp 00:36:32.875 ************************************ 00:36:32.875 08:35:29 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:36:32.875 08:35:29 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:32.875 08:35:29 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:32.875 08:35:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:32.875 08:35:29 -- common/autotest_common.sh@10 -- # set +x 00:36:32.875 ************************************ 00:36:32.875 START TEST spdkcli_nvmf_tcp 00:36:32.875 ************************************ 00:36:32.875 08:35:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:32.875 * Looking for test storage... 00:36:32.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:36:32.875 08:35:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:32.875 08:35:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:36:32.875 08:35:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:32.875 08:35:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:32.875 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:32.875 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:32.875 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:32.875 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:36:32.875 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:36:32.875 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:36:32.875 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:36:32.875 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:32.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.876 --rc genhtml_branch_coverage=1 00:36:32.876 --rc genhtml_function_coverage=1 00:36:32.876 --rc genhtml_legend=1 00:36:32.876 --rc geninfo_all_blocks=1 00:36:32.876 --rc geninfo_unexecuted_blocks=1 00:36:32.876 00:36:32.876 ' 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:32.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.876 --rc genhtml_branch_coverage=1 00:36:32.876 --rc genhtml_function_coverage=1 00:36:32.876 --rc genhtml_legend=1 00:36:32.876 --rc geninfo_all_blocks=1 00:36:32.876 --rc geninfo_unexecuted_blocks=1 00:36:32.876 00:36:32.876 ' 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:32.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.876 --rc genhtml_branch_coverage=1 00:36:32.876 --rc genhtml_function_coverage=1 00:36:32.876 --rc genhtml_legend=1 00:36:32.876 --rc geninfo_all_blocks=1 00:36:32.876 --rc geninfo_unexecuted_blocks=1 00:36:32.876 00:36:32.876 ' 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:32.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:32.876 --rc genhtml_branch_coverage=1 00:36:32.876 --rc genhtml_function_coverage=1 00:36:32.876 --rc genhtml_legend=1 00:36:32.876 --rc geninfo_all_blocks=1 00:36:32.876 --rc geninfo_unexecuted_blocks=1 00:36:32.876 00:36:32.876 ' 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:32.876 
08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:32.876 08:35:29 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:32.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2260124 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2260124 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2260124 ']' 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:32.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:32.876 08:35:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:32.876 [2024-11-28 08:35:30.007241] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
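While this second target (pid 2260124) finishes starting, the spdkcli test below drives its configuration through spdkcli_job.py, which batches spdkcli commands and checks each result. The same commands can also be issued one at a time with scripts/spdkcli.py, which accepts a command as arguments (this log itself runs 'spdkcli.py ll /nvmf' that way for the match check); whether every create command behaves identically when run one-off like this is an assumption. The command text is verbatim from the job list below:

    # Hand-run equivalents of the first few batched commands:
    ./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
    ./scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
    ./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4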
00:36:32.876 [2024-11-28 08:35:30.007312] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2260124 ] 00:36:32.876 [2024-11-28 08:35:30.100184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:32.876 [2024-11-28 08:35:30.156812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:32.876 [2024-11-28 08:35:30.156818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:33.822 08:35:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:33.822 08:35:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:36:33.822 08:35:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:33.822 08:35:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:33.822 08:35:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:33.822 08:35:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:33.822 08:35:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:33.822 08:35:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:33.822 08:35:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:33.822 08:35:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:33.822 08:35:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:33.822 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:33.822 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:33.822 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:33.822 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:33.822 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:33.822 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:33.822 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:33.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:33.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:33.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:33.822 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:33.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:33.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:33.822 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:33.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:33.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:36:33.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:33.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:33.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:33.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:33.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:33.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:33.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:33.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:33.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:33.822 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:33.822 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:33.822 ' 00:36:36.370 [2024-11-28 08:35:33.633223] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:37.754 [2024-11-28 08:35:35.001446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:40.298 [2024-11-28 08:35:37.528453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:42.846 [2024-11-28 08:35:39.754760] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:44.231 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:44.231 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:44.231 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:44.231 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:44.231 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:44.231 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:44.231 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:44.231 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:44.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:44.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:44.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:44.231 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:44.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:44.231 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:44.231 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:44.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:44.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:44.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:44.231 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:44.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:44.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:44.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:44.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:44.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:44.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:44.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:44.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:44.232 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:44.493 08:35:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:44.493 08:35:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:44.493 08:35:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:44.493 08:35:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:44.493 08:35:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:44.493 08:35:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:44.493 08:35:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:44.493 08:35:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:44.754 08:35:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:44.754 08:35:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:44.754 08:35:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:44.754 08:35:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:44.754 08:35:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:45.014 
08:35:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:45.014 08:35:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:45.014 08:35:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:45.015 08:35:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:45.015 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:45.015 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:45.015 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:45.015 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:45.015 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:45.015 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:45.015 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:45.015 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:45.015 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:45.015 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:45.015 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:45.015 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:45.015 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:45.015 ' 00:36:51.599 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:51.599 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:51.599 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:51.599 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:51.599 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:51.599 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:51.599 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:51.599 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:51.599 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:51.599 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:51.599 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:36:51.599 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:51.599 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:51.599 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:51.599 
08:35:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2260124 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2260124 ']' 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2260124 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2260124 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2260124' 00:36:51.599 killing process with pid 2260124 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2260124 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2260124 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2260124 ']' 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2260124 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2260124 ']' 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2260124 00:36:51.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2260124) - No such process 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2260124 is not found' 00:36:51.599 Process with pid 2260124 is not found 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:51.599 00:36:51.599 real 0m18.237s 00:36:51.599 user 0m40.487s 00:36:51.599 sys 0m0.947s 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:51.599 08:35:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:51.599 ************************************ 00:36:51.599 END TEST spdkcli_nvmf_tcp 00:36:51.599 ************************************ 00:36:51.599 08:35:47 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:51.599 08:35:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:51.599 08:35:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:51.599 08:35:47 -- common/autotest_common.sh@10 -- # set +x 00:36:51.599 ************************************ 00:36:51.599 START TEST nvmf_identify_passthru 00:36:51.599 ************************************ 00:36:51.599 08:35:48 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:51.599 * Looking for test 
storage... 00:36:51.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:51.600 08:35:48 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:51.600 08:35:48 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:36:51.600 08:35:48 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:51.600 08:35:48 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:36:51.600 08:35:48 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:51.600 08:35:48 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:51.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.600 --rc genhtml_branch_coverage=1 00:36:51.600 --rc genhtml_function_coverage=1 00:36:51.600 --rc genhtml_legend=1 00:36:51.600 --rc geninfo_all_blocks=1 00:36:51.600 --rc geninfo_unexecuted_blocks=1 00:36:51.600 00:36:51.600 ' 00:36:51.600 08:35:48 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:51.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.600 --rc genhtml_branch_coverage=1 00:36:51.600 --rc genhtml_function_coverage=1 00:36:51.600 --rc genhtml_legend=1 00:36:51.600 --rc geninfo_all_blocks=1 00:36:51.600 --rc geninfo_unexecuted_blocks=1 00:36:51.600 00:36:51.600 ' 00:36:51.600 08:35:48 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:51.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.600 --rc genhtml_branch_coverage=1 00:36:51.600 --rc genhtml_function_coverage=1 00:36:51.600 --rc genhtml_legend=1 00:36:51.600 --rc geninfo_all_blocks=1 00:36:51.600 --rc geninfo_unexecuted_blocks=1 00:36:51.600 00:36:51.600 ' 00:36:51.600 08:35:48 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:51.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.600 --rc genhtml_branch_coverage=1 00:36:51.600 --rc genhtml_function_coverage=1 00:36:51.600 --rc genhtml_legend=1 00:36:51.600 --rc geninfo_all_blocks=1 00:36:51.600 --rc geninfo_unexecuted_blocks=1 00:36:51.600 00:36:51.600 ' 00:36:51.600 08:35:48 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:51.600 08:35:48 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.600 08:35:48 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.600 08:35:48 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.600 08:35:48 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:51.600 08:35:48 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:51.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:51.600 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:51.600 08:35:48 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:51.600 08:35:48 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:51.600 08:35:48 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.600 08:35:48 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.600 08:35:48 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.600 08:35:48 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:51.601 08:35:48 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.601 08:35:48 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:51.601 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:51.601 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:51.601 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:51.601 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:51.601 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:51.601 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:51.601 08:35:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:51.601 08:35:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:51.601 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:51.601 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:51.601 08:35:48 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:36:51.601 08:35:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:36:58.191 08:35:55 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:58.191 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:58.191 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:58.191 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:58.192 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:58.192 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:58.192 08:35:55 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:58.192 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:58.453 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:58.453 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:58.453 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:58.453 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:58.453 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:58.453 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:58.453 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:58.714 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:58.714 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:58.714 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:58.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:58.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:36:58.714 00:36:58.714 --- 10.0.0.2 ping statistics --- 00:36:58.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:58.714 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:36:58.714 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:58.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
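The nvmf_tcp_init trace above splits the two E810 ports across a network namespace so a real physical loopback carries the NVMe/TCP traffic: cvl_0_0 (10.0.0.2) becomes the target inside cvl_0_0_ns_spdk, while cvl_0_1 (10.0.0.1) stays in the root namespace as the initiator. A condensed sketch of the traced steps:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator reachability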
00:36:58.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:36:58.714 00:36:58.714 --- 10.0.0.1 ping statistics --- 00:36:58.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:58.714 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:36:58.714 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:58.714 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:36:58.714 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:58.714 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:58.714 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:58.714 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:58.714 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:58.714 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:58.714 08:35:55 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:58.714 08:35:55 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:58.714 08:35:55 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:58.714 08:35:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:58.714 08:35:55 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:58.714 08:35:55 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:36:58.714 08:35:55 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:36:58.714 08:35:55 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:36:58.714 08:35:55 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:36:58.714 08:35:55 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:36:58.714 08:35:55 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:36:58.714 08:35:55 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:58.714 08:35:55 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:58.714 08:35:55 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:36:58.714 08:35:55 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:36:58.714 08:35:55 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:36:58.714 08:35:55 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:36:58.714 08:35:55 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:36:58.714 08:35:55 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:36:58.714 08:35:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:58.714 08:35:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:58.714 08:35:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:59.285 08:35:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:36:59.285 08:35:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:59.285 08:35:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:59.285 08:35:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:59.856 08:35:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:36:59.856 08:35:56 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:59.856 08:35:56 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:59.856 08:35:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:59.856 08:35:56 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:59.856 08:35:56 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:59.856 08:35:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:59.856 08:35:57 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2267534 00:36:59.856 08:35:57 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:59.856 08:35:57 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:59.856 08:35:57 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2267534 00:36:59.856 08:35:57 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2267534 ']' 00:36:59.856 08:35:57 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:59.856 08:35:57 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:59.856 08:35:57 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:59.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:59.856 08:35:57 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:59.856 08:35:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:59.856 [2024-11-28 08:35:57.059191] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:36:59.856 [2024-11-28 08:35:57.059263] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:00.118 [2024-11-28 08:35:57.156041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:00.118 [2024-11-28 08:35:57.210221] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:00.118 [2024-11-28 08:35:57.210275] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
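nvmf_tgt is launched inside the target namespace with --wait-for-rpc so that nvmf_set_config can enable the custom identify-ctrlr passthru handler before the framework initializes; only then is the TCP transport created and the local PCIe controller re-exported as a subsystem. A sketch of the equivalent rpc.py sequence run from the repo root (the flags match the rpc_cmd calls traced here; the polling loop is a stand-in for the test's waitforlisten helper):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.5; done  # wait for /var/tmp/spdk.sock
  ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The test then verifies passthru by comparing the serial and model numbers reported over the fabric against the local PCIe identify; both should match when the passthru handler is enabled:

  ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:'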
00:37:00.118 [2024-11-28 08:35:57.210284] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:00.118 [2024-11-28 08:35:57.210291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:00.118 [2024-11-28 08:35:57.210297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:00.118 [2024-11-28 08:35:57.212334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:00.118 [2024-11-28 08:35:57.212496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:00.118 [2024-11-28 08:35:57.212658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:00.118 [2024-11-28 08:35:57.212658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:00.691 08:35:57 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:00.691 08:35:57 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:37:00.691 08:35:57 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:37:00.691 08:35:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.691 08:35:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:00.691 INFO: Log level set to 20 00:37:00.691 INFO: Requests: 00:37:00.691 { 00:37:00.691 "jsonrpc": "2.0", 00:37:00.691 "method": "nvmf_set_config", 00:37:00.691 "id": 1, 00:37:00.691 "params": { 00:37:00.691 "admin_cmd_passthru": { 00:37:00.691 "identify_ctrlr": true 00:37:00.691 } 00:37:00.691 } 00:37:00.691 } 00:37:00.691 00:37:00.691 INFO: response: 00:37:00.691 { 00:37:00.691 "jsonrpc": "2.0", 00:37:00.691 "id": 1, 00:37:00.691 "result": true 00:37:00.691 } 00:37:00.691 00:37:00.691 08:35:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.691 08:35:57 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:37:00.691 08:35:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.691 08:35:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:00.691 INFO: Setting log level to 20 00:37:00.691 INFO: Setting log level to 20 00:37:00.691 INFO: Log level set to 20 00:37:00.691 INFO: Log level set to 20 00:37:00.691 INFO: Requests: 00:37:00.691 { 00:37:00.691 "jsonrpc": "2.0", 00:37:00.691 "method": "framework_start_init", 00:37:00.691 "id": 1 00:37:00.691 } 00:37:00.691 00:37:00.691 INFO: Requests: 00:37:00.691 { 00:37:00.691 "jsonrpc": "2.0", 00:37:00.691 "method": "framework_start_init", 00:37:00.691 "id": 1 00:37:00.691 } 00:37:00.691 00:37:00.691 [2024-11-28 08:35:57.968876] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:37:00.691 INFO: response: 00:37:00.691 { 00:37:00.691 "jsonrpc": "2.0", 00:37:00.691 "id": 1, 00:37:00.691 "result": true 00:37:00.691 } 00:37:00.691 00:37:00.691 INFO: response: 00:37:00.691 { 00:37:00.691 "jsonrpc": "2.0", 00:37:00.691 "id": 1, 00:37:00.691 "result": true 00:37:00.691 } 00:37:00.691 00:37:00.691 08:35:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.691 08:35:57 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:00.691 08:35:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.691 08:35:57 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:37:00.952 INFO: Setting log level to 40 00:37:00.952 INFO: Setting log level to 40 00:37:00.952 INFO: Setting log level to 40 00:37:00.952 [2024-11-28 08:35:57.982440] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:00.952 08:35:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.952 08:35:57 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:37:00.952 08:35:57 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:00.952 08:35:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:00.952 08:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:37:00.952 08:35:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.952 08:35:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:01.213 Nvme0n1 00:37:01.213 08:35:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.213 08:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:37:01.213 08:35:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.213 08:35:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:01.213 08:35:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.213 08:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:37:01.213 08:35:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.213 08:35:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:01.213 08:35:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.213 08:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:01.213 08:35:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.213 08:35:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:01.213 [2024-11-28 08:35:58.401103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:01.213 08:35:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.213 08:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:37:01.213 08:35:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.213 08:35:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:01.213 [ 00:37:01.213 { 00:37:01.213 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:37:01.213 "subtype": "Discovery", 00:37:01.213 "listen_addresses": [], 00:37:01.213 "allow_any_host": true, 00:37:01.213 "hosts": [] 00:37:01.213 }, 00:37:01.213 { 00:37:01.213 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:01.213 "subtype": "NVMe", 00:37:01.213 "listen_addresses": [ 00:37:01.213 { 00:37:01.213 "trtype": "TCP", 00:37:01.213 "adrfam": "IPv4", 00:37:01.213 "traddr": "10.0.0.2", 00:37:01.213 "trsvcid": "4420" 00:37:01.213 } 00:37:01.213 ], 00:37:01.213 "allow_any_host": true, 00:37:01.213 "hosts": [], 00:37:01.213 "serial_number": 
"SPDK00000000000001", 00:37:01.213 "model_number": "SPDK bdev Controller", 00:37:01.213 "max_namespaces": 1, 00:37:01.213 "min_cntlid": 1, 00:37:01.213 "max_cntlid": 65519, 00:37:01.213 "namespaces": [ 00:37:01.213 { 00:37:01.213 "nsid": 1, 00:37:01.213 "bdev_name": "Nvme0n1", 00:37:01.213 "name": "Nvme0n1", 00:37:01.213 "nguid": "36344730526054870025384500000044", 00:37:01.213 "uuid": "36344730-5260-5487-0025-384500000044" 00:37:01.213 } 00:37:01.213 ] 00:37:01.213 } 00:37:01.213 ] 00:37:01.213 08:35:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.213 08:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:01.213 08:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:37:01.213 08:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:37:01.475 08:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:37:01.475 08:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:01.475 08:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:37:01.475 08:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:37:01.736 08:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:37:01.736 08:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:37:01.736 08:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:37:01.736 08:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:01.736 08:35:58 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.736 08:35:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:01.736 08:35:58 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.736 08:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:37:01.736 08:35:58 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:37:01.736 08:35:58 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:01.736 08:35:58 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:37:01.736 08:35:58 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:01.736 08:35:58 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:37:01.736 08:35:58 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:01.736 08:35:58 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:01.736 rmmod nvme_tcp 00:37:01.736 rmmod nvme_fabrics 00:37:01.736 rmmod nvme_keyring 00:37:01.736 08:35:58 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:01.736 08:35:58 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:37:01.736 08:35:58 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:37:01.736 08:35:58 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
2267534 ']' 00:37:01.736 08:35:58 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2267534 00:37:01.736 08:35:58 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2267534 ']' 00:37:01.736 08:35:58 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2267534 00:37:01.736 08:35:58 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:37:01.736 08:35:58 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:01.736 08:35:58 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2267534 00:37:01.736 08:35:59 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:01.736 08:35:59 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:01.737 08:35:59 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2267534' 00:37:01.737 killing process with pid 2267534 00:37:01.737 08:35:59 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2267534 00:37:01.737 08:35:59 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2267534 00:37:02.309 08:35:59 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:02.310 08:35:59 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:02.310 08:35:59 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:02.310 08:35:59 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:37:02.310 08:35:59 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:37:02.310 08:35:59 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:02.310 08:35:59 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:37:02.310 08:35:59 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:02.310 08:35:59 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:02.310 08:35:59 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:02.310 08:35:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:02.310 08:35:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:04.223 08:36:01 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:04.223 00:37:04.223 real 0m13.356s 00:37:04.223 user 0m10.451s 00:37:04.223 sys 0m6.933s 00:37:04.223 08:36:01 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:04.223 08:36:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:04.223 ************************************ 00:37:04.223 END TEST nvmf_identify_passthru 00:37:04.223 ************************************ 00:37:04.223 08:36:01 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:04.223 08:36:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:04.223 08:36:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:04.223 08:36:01 -- common/autotest_common.sh@10 -- # set +x 00:37:04.223 ************************************ 00:37:04.223 START TEST nvmf_dif 00:37:04.223 ************************************ 00:37:04.223 08:36:01 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:04.484 * Looking for test storage... 
00:37:04.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:04.484 08:36:01 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:04.484 08:36:01 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:37:04.484 08:36:01 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:04.484 08:36:01 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:04.484 08:36:01 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:04.484 08:36:01 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:04.484 08:36:01 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:04.484 08:36:01 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:37:04.484 08:36:01 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:37:04.484 08:36:01 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:37:04.484 08:36:01 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:37:04.484 08:36:01 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:37:04.485 08:36:01 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:37:04.485 08:36:01 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:37:04.485 08:36:01 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:04.485 08:36:01 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:37:04.485 08:36:01 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:37:04.485 08:36:01 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:04.485 08:36:01 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:04.485 08:36:01 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:37:04.485 08:36:01 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:37:04.485 08:36:01 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:04.485 08:36:01 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:37:04.485 08:36:01 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:37:04.485 08:36:01 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:37:04.485 08:36:01 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:37:04.485 08:36:01 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:04.485 08:36:01 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:37:04.485 08:36:01 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:37:04.485 08:36:01 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:04.485 08:36:01 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:04.485 08:36:01 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:37:04.485 08:36:01 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:04.485 08:36:01 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:04.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.485 --rc genhtml_branch_coverage=1 00:37:04.485 --rc genhtml_function_coverage=1 00:37:04.485 --rc genhtml_legend=1 00:37:04.485 --rc geninfo_all_blocks=1 00:37:04.485 --rc geninfo_unexecuted_blocks=1 00:37:04.485 00:37:04.485 ' 00:37:04.485 08:36:01 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:04.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.485 --rc genhtml_branch_coverage=1 00:37:04.485 --rc genhtml_function_coverage=1 00:37:04.485 --rc genhtml_legend=1 00:37:04.485 --rc geninfo_all_blocks=1 00:37:04.485 --rc geninfo_unexecuted_blocks=1 00:37:04.485 00:37:04.485 ' 00:37:04.485 08:36:01 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:37:04.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.485 --rc genhtml_branch_coverage=1 00:37:04.485 --rc genhtml_function_coverage=1 00:37:04.485 --rc genhtml_legend=1 00:37:04.485 --rc geninfo_all_blocks=1 00:37:04.485 --rc geninfo_unexecuted_blocks=1 00:37:04.485 00:37:04.485 ' 00:37:04.485 08:36:01 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:04.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:04.485 --rc genhtml_branch_coverage=1 00:37:04.485 --rc genhtml_function_coverage=1 00:37:04.485 --rc genhtml_legend=1 00:37:04.485 --rc geninfo_all_blocks=1 00:37:04.485 --rc geninfo_unexecuted_blocks=1 00:37:04.485 00:37:04.485 ' 00:37:04.485 08:36:01 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:04.485 08:36:01 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:37:04.485 08:36:01 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:04.485 08:36:01 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:04.485 08:36:01 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:04.485 08:36:01 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.485 08:36:01 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.485 08:36:01 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.485 08:36:01 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:37:04.485 08:36:01 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:04.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:04.485 08:36:01 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:37:04.485 08:36:01 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:37:04.485 08:36:01 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:37:04.485 08:36:01 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:37:04.485 08:36:01 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:04.485 08:36:01 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:04.486 08:36:01 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:04.486 08:36:01 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:04.486 08:36:01 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:04.486 08:36:01 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:04.486 08:36:01 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:04.486 08:36:01 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:04.486 08:36:01 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:04.486 08:36:01 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:37:04.486 08:36:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:12.631 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:12.631 
08:36:08 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:12.631 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:12.631 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:12.631 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:12.631 08:36:08 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:12.632 08:36:08 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:12.632 08:36:08 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:12.632 08:36:08 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:12.632 08:36:08 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:12.632 08:36:08 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:12.632 08:36:08 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:12.632 08:36:08 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:12.632 08:36:08 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:12.632 08:36:08 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:12.632 08:36:08 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:12.632 08:36:08 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:12.632 08:36:08 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:12.632 08:36:08 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:12.632 08:36:08 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:12.632 08:36:09 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:12.632 08:36:09 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:12.632 08:36:09 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:12.632 08:36:09 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:12.632 08:36:09 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:12.632 08:36:09 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:12.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:12.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:37:12.632 00:37:12.632 --- 10.0.0.2 ping statistics --- 00:37:12.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:12.632 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:37:12.632 08:36:09 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:12.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:12.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:37:12.632 00:37:12.632 --- 10.0.0.1 ping statistics --- 00:37:12.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:12.632 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:37:12.632 08:36:09 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:12.632 08:36:09 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:37:12.632 08:36:09 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:12.632 08:36:09 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:15.306 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:15.306 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:15.306 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:15.306 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:15.306 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:15.306 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:15.307 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:15.307 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:15.307 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:15.307 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:37:15.307 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:15.307 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:15.307 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:15.307 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:15.307 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:15.307 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:15.307 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:15.878 08:36:12 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:15.878 08:36:12 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:15.878 08:36:12 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:15.878 08:36:12 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:15.878 08:36:12 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:15.878 08:36:12 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:15.878 08:36:13 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:37:15.878 08:36:13 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:37:15.878 08:36:13 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:15.878 08:36:13 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:15.878 08:36:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:15.878 08:36:13 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2274219 00:37:15.878 08:36:13 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2274219 00:37:15.878 08:36:13 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:37:15.878 08:36:13 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2274219 ']' 00:37:15.878 08:36:13 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:15.878 08:36:13 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:15.878 08:36:13 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:37:15.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:15.878 08:36:13 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:15.878 08:36:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:15.878 [2024-11-28 08:36:13.084768] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:37:15.878 [2024-11-28 08:36:13.084835] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:16.139 [2024-11-28 08:36:13.185061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.139 [2024-11-28 08:36:13.236196] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:16.139 [2024-11-28 08:36:13.236243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:16.139 [2024-11-28 08:36:13.236252] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:16.139 [2024-11-28 08:36:13.236259] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:16.139 [2024-11-28 08:36:13.236266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:16.139 [2024-11-28 08:36:13.237036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:16.712 08:36:13 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:16.712 08:36:13 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:37:16.712 08:36:13 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:16.712 08:36:13 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:16.712 08:36:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:16.712 08:36:13 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:16.712 08:36:13 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:37:16.712 08:36:13 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:37:16.712 08:36:13 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.712 08:36:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:16.712 [2024-11-28 08:36:13.947758] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:16.712 08:36:13 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.712 08:36:13 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:37:16.712 08:36:13 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:16.712 08:36:13 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:16.712 08:36:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:16.712 ************************************ 00:37:16.712 START TEST fio_dif_1_default 00:37:16.712 ************************************ 00:37:16.712 08:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:37:16.712 08:36:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:37:16.712 08:36:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:37:16.712 08:36:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:37:16.712 08:36:13 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:37:16.712 08:36:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:37:16.712 08:36:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:16.712 08:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.712 08:36:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:16.974 bdev_null0 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:16.974 [2024-11-28 08:36:14.036213] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:16.974 { 00:37:16.974 "params": { 00:37:16.974 "name": "Nvme$subsystem", 00:37:16.974 "trtype": "$TEST_TRANSPORT", 00:37:16.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:16.974 "adrfam": "ipv4", 00:37:16.974 "trsvcid": "$NVMF_PORT", 00:37:16.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:16.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:16.974 "hdgst": ${hdgst:-false}, 00:37:16.974 
"ddgst": ${ddgst:-false} 00:37:16.974 }, 00:37:16.974 "method": "bdev_nvme_attach_controller" 00:37:16.974 } 00:37:16.974 EOF 00:37:16.974 )") 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:16.974 "params": { 00:37:16.974 "name": "Nvme0", 00:37:16.974 "trtype": "tcp", 00:37:16.974 "traddr": "10.0.0.2", 00:37:16.974 "adrfam": "ipv4", 00:37:16.974 "trsvcid": "4420", 00:37:16.974 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:16.974 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:16.974 "hdgst": false, 00:37:16.974 "ddgst": false 00:37:16.974 }, 00:37:16.974 "method": "bdev_nvme_attach_controller" 00:37:16.974 }' 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:16.974 08:36:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:17.235 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:17.236 fio-3.35 00:37:17.236 Starting 1 thread 00:37:29.472 00:37:29.472 filename0: (groupid=0, jobs=1): err= 0: pid=2274798: Thu Nov 28 08:36:25 2024 00:37:29.472 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10016msec) 00:37:29.472 slat (nsec): min=5544, max=36073, avg=6313.52, stdev=1730.08 00:37:29.472 clat (usec): min=40867, max=43010, avg=41027.92, stdev=225.06 00:37:29.473 lat (usec): min=40873, max=43046, avg=41034.24, stdev=225.81 00:37:29.473 clat percentiles (usec): 00:37:29.473 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:29.473 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:29.473 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:29.473 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:37:29.473 | 99.99th=[43254] 00:37:29.473 bw ( KiB/s): min= 384, max= 416, per=99.54%, avg=388.80, stdev=11.72, samples=20 00:37:29.473 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:37:29.473 lat (msec) : 50=100.00% 00:37:29.473 cpu : usr=93.78%, sys=5.98%, ctx=6, majf=0, minf=213 00:37:29.473 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:29.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:29.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:29.473 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:29.473 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:29.473 00:37:29.473 Run status group 0 (all jobs): 
00:37:29.473 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10016-10016msec 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.473 00:37:29.473 real 0m11.306s 00:37:29.473 user 0m18.223s 00:37:29.473 sys 0m1.041s 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:29.473 ************************************ 00:37:29.473 END TEST fio_dif_1_default 00:37:29.473 ************************************ 00:37:29.473 08:36:25 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:29.473 08:36:25 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:29.473 08:36:25 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:29.473 08:36:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:29.473 ************************************ 00:37:29.473 START TEST fio_dif_1_multi_subsystems 00:37:29.473 ************************************ 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:29.473 bdev_null0 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:29.473 [2024-11-28 08:36:25.422188] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:29.473 bdev_null1 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:29.473 { 00:37:29.473 "params": { 00:37:29.473 "name": "Nvme$subsystem", 00:37:29.473 "trtype": "$TEST_TRANSPORT", 00:37:29.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:29.473 "adrfam": "ipv4", 00:37:29.473 "trsvcid": "$NVMF_PORT", 00:37:29.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:29.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:29.473 "hdgst": ${hdgst:-false}, 00:37:29.473 "ddgst": ${ddgst:-false} 00:37:29.473 }, 00:37:29.473 "method": "bdev_nvme_attach_controller" 00:37:29.473 } 00:37:29.473 EOF 00:37:29.473 )") 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:29.473 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( 
file = 1 )) 00:37:29.474 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:37:29.474 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:29.474 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:29.474 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:37:29.474 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:29.474 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:29.474 { 00:37:29.474 "params": { 00:37:29.474 "name": "Nvme$subsystem", 00:37:29.474 "trtype": "$TEST_TRANSPORT", 00:37:29.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:29.474 "adrfam": "ipv4", 00:37:29.474 "trsvcid": "$NVMF_PORT", 00:37:29.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:29.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:29.474 "hdgst": ${hdgst:-false}, 00:37:29.474 "ddgst": ${ddgst:-false} 00:37:29.474 }, 00:37:29.474 "method": "bdev_nvme_attach_controller" 00:37:29.474 } 00:37:29.474 EOF 00:37:29.474 )") 00:37:29.474 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:37:29.474 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:29.474 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:29.474 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:37:29.474 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:37:29.474 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:29.474 "params": { 00:37:29.474 "name": "Nvme0", 00:37:29.474 "trtype": "tcp", 00:37:29.474 "traddr": "10.0.0.2", 00:37:29.474 "adrfam": "ipv4", 00:37:29.474 "trsvcid": "4420", 00:37:29.474 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:29.474 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:29.474 "hdgst": false, 00:37:29.474 "ddgst": false 00:37:29.474 }, 00:37:29.474 "method": "bdev_nvme_attach_controller" 00:37:29.474 },{ 00:37:29.474 "params": { 00:37:29.474 "name": "Nvme1", 00:37:29.474 "trtype": "tcp", 00:37:29.474 "traddr": "10.0.0.2", 00:37:29.474 "adrfam": "ipv4", 00:37:29.474 "trsvcid": "4420", 00:37:29.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:29.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:29.474 "hdgst": false, 00:37:29.474 "ddgst": false 00:37:29.474 }, 00:37:29.474 "method": "bdev_nvme_attach_controller" 00:37:29.474 }' 00:37:29.474 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:29.474 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:29.474 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:29.474 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:29.474 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:29.474 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:29.474 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:29.474 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:29.474 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:29.474 08:36:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:29.474 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:29.474 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:29.474 fio-3.35 00:37:29.474 Starting 2 threads 00:37:39.477 00:37:39.477 filename0: (groupid=0, jobs=1): err= 0: pid=2277014: Thu Nov 28 08:36:36 2024 00:37:39.477 read: IOPS=190, BW=763KiB/s (781kB/s)(7632KiB/10003msec) 00:37:39.477 slat (nsec): min=5543, max=31512, avg=6608.00, stdev=2120.66 00:37:39.477 clat (usec): min=530, max=41883, avg=20950.83, stdev=20150.28 00:37:39.477 lat (usec): min=536, max=41889, avg=20957.44, stdev=20150.12 00:37:39.477 clat percentiles (usec): 00:37:39.477 | 1.00th=[ 644], 5.00th=[ 799], 10.00th=[ 824], 20.00th=[ 840], 00:37:39.477 | 30.00th=[ 857], 40.00th=[ 865], 50.00th=[ 1844], 60.00th=[41157], 00:37:39.477 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:39.477 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:37:39.477 | 99.99th=[41681] 00:37:39.477 bw ( KiB/s): min= 704, max= 768, per=66.40%, avg=764.63, stdev=14.68, samples=19 00:37:39.477 iops : min= 176, max= 192, avg=191.16, stdev= 3.67, samples=19 00:37:39.477 lat (usec) : 750=1.89%, 1000=45.86% 00:37:39.477 lat (msec) : 2=2.36%, 50=49.90% 00:37:39.477 cpu : usr=95.99%, sys=3.79%, ctx=9, majf=0, minf=178 00:37:39.477 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:39.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.478 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.478 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:39.478 filename1: (groupid=0, jobs=1): err= 0: pid=2277015: Thu Nov 28 08:36:36 2024 00:37:39.478 read: IOPS=97, BW=389KiB/s (399kB/s)(3904KiB/10026msec) 00:37:39.478 slat (nsec): min=5544, max=31585, avg=6602.71, stdev=2200.36 00:37:39.478 clat (usec): min=40845, max=42200, avg=41069.01, stdev=280.72 00:37:39.478 lat (usec): min=40853, max=42226, avg=41075.61, stdev=281.57 00:37:39.478 clat percentiles (usec): 00:37:39.478 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:39.478 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:39.478 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:37:39.478 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:39.478 | 99.99th=[42206] 00:37:39.478 bw ( KiB/s): min= 384, max= 416, per=33.72%, avg=388.80, stdev=11.72, samples=20 00:37:39.478 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:37:39.478 lat (msec) : 50=100.00% 00:37:39.478 cpu : usr=95.13%, sys=4.66%, ctx=14, majf=0, minf=86 00:37:39.478 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:39.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:39.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:37:39.478 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:39.478 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:39.478 00:37:39.478 Run status group 0 (all jobs): 00:37:39.478 READ: bw=1151KiB/s (1178kB/s), 389KiB/s-763KiB/s (399kB/s-781kB/s), io=11.3MiB (11.8MB), run=10003-10026msec 00:37:39.739 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:37:39.739 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:39.739 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:39.739 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:39.739 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:39.739 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:39.739 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.739 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:39.740 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.740 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:39.740 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.740 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:39.740 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.740 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:39.740 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:39.740 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:39.740 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:39.740 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.740 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:39.740 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.740 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:39.740 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.740 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:39.740 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.740 00:37:39.740 real 0m11.501s 00:37:39.740 user 0m36.387s 00:37:39.740 sys 0m1.232s 00:37:39.740 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:39.740 08:36:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:39.740 ************************************ 00:37:39.740 END TEST fio_dif_1_multi_subsystems 00:37:39.740 ************************************ 00:37:39.740 08:36:36 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:37:39.740 08:36:36 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:39.740 08:36:36 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:39.740 08:36:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:39.740 ************************************ 00:37:39.740 START TEST fio_dif_rand_params 00:37:39.740 ************************************ 00:37:39.740 08:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:37:39.740 08:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:39.740 08:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:39.740 08:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:39.740 08:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:39.740 08:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:39.740 08:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:39.740 08:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:39.740 08:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:39.740 08:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:39.740 08:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:39.740 08:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:39.740 08:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:39.740 08:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:39.740 08:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.740 08:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.740 bdev_null0 00:37:39.740 08:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.740 08:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:39.740 08:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.740 08:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.740 08:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.740 08:36:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:39.740 08:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.740 08:36:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:39.740 [2024-11-28 08:36:37.008541] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:39.740 { 00:37:39.740 "params": { 00:37:39.740 "name": "Nvme$subsystem", 00:37:39.740 "trtype": "$TEST_TRANSPORT", 00:37:39.740 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:39.740 "adrfam": "ipv4", 00:37:39.740 "trsvcid": "$NVMF_PORT", 00:37:39.740 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:39.740 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:39.740 "hdgst": ${hdgst:-false}, 00:37:39.740 "ddgst": ${ddgst:-false} 00:37:39.740 }, 00:37:39.740 "method": "bdev_nvme_attach_controller" 00:37:39.740 } 00:37:39.740 EOF 00:37:39.740 )") 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:39.740 08:36:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
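For reference, the rpc_cmd calls traced in this fio_dif_rand_params setup reduce to the rpc.py sequence below, issued against the nvmf_tgt started earlier on /var/tmp/spdk.sock. Every command name and argument is the one visible in the trace; only the shell variable holding the script path is added for readability:

# DIF-insert-or-strip TCP transport, then a 64 MiB null bdev with 512-byte
# blocks, 16 bytes of metadata and DIF type 3, exposed over NVMe-oF TCP.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
  --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420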
00:37:40.000 08:36:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:40.000 08:36:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:40.000 "params": { 00:37:40.000 "name": "Nvme0", 00:37:40.000 "trtype": "tcp", 00:37:40.000 "traddr": "10.0.0.2", 00:37:40.000 "adrfam": "ipv4", 00:37:40.000 "trsvcid": "4420", 00:37:40.000 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:40.000 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:40.000 "hdgst": false, 00:37:40.000 "ddgst": false 00:37:40.000 }, 00:37:40.000 "method": "bdev_nvme_attach_controller" 00:37:40.000 }' 00:37:40.000 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:40.000 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:40.001 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:40.001 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:40.001 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:40.001 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:40.001 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:40.001 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:40.001 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:40.001 08:36:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:40.261 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:40.261 ... 
00:37:40.261 fio-3.35 00:37:40.261 Starting 3 threads 00:37:46.851 00:37:46.851 filename0: (groupid=0, jobs=1): err= 0: pid=2279221: Thu Nov 28 08:36:42 2024 00:37:46.851 read: IOPS=235, BW=29.5MiB/s (30.9MB/s)(149MiB/5038msec) 00:37:46.851 slat (nsec): min=5568, max=72600, avg=8773.20, stdev=2078.96 00:37:46.851 clat (usec): min=3467, max=91016, avg=12709.86, stdev=15336.77 00:37:46.851 lat (usec): min=3473, max=91025, avg=12718.63, stdev=15336.87 00:37:46.851 clat percentiles (usec): 00:37:46.851 | 1.00th=[ 4293], 5.00th=[ 5211], 10.00th=[ 5604], 20.00th=[ 5997], 00:37:46.851 | 30.00th=[ 6456], 40.00th=[ 7308], 50.00th=[ 7832], 60.00th=[ 8094], 00:37:46.851 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[46400], 95.00th=[49021], 00:37:46.851 | 99.00th=[87557], 99.50th=[89654], 99.90th=[90702], 99.95th=[90702], 00:37:46.851 | 99.99th=[90702] 00:37:46.851 bw ( KiB/s): min=16128, max=48384, per=27.45%, avg=30336.00, stdev=10587.01, samples=10 00:37:46.851 iops : min= 126, max= 378, avg=237.00, stdev=82.71, samples=10 00:37:46.851 lat (msec) : 4=0.25%, 10=87.21%, 20=0.34%, 50=9.26%, 100=2.95% 00:37:46.851 cpu : usr=93.39%, sys=4.76%, ctx=287, majf=0, minf=101 00:37:46.851 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:46.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.851 issued rwts: total=1188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.851 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:46.851 filename0: (groupid=0, jobs=1): err= 0: pid=2279222: Thu Nov 28 08:36:42 2024 00:37:46.851 read: IOPS=280, BW=35.1MiB/s (36.8MB/s)(176MiB/5024msec) 00:37:46.851 slat (nsec): min=5557, max=33037, avg=8479.46, stdev=1947.40 00:37:46.851 clat (usec): min=3488, max=90565, avg=10685.47, stdev=12325.87 00:37:46.851 lat (usec): min=3494, max=90571, avg=10693.95, stdev=12325.99 00:37:46.851 clat percentiles (usec): 00:37:46.851 | 1.00th=[ 3982], 5.00th=[ 4359], 10.00th=[ 5145], 20.00th=[ 5735], 00:37:46.851 | 30.00th=[ 6128], 40.00th=[ 6652], 50.00th=[ 7439], 60.00th=[ 7963], 00:37:46.851 | 70.00th=[ 8455], 80.00th=[ 8979], 90.00th=[10290], 95.00th=[47449], 00:37:46.851 | 99.00th=[50594], 99.50th=[52167], 99.90th=[89654], 99.95th=[90702], 00:37:46.851 | 99.99th=[90702] 00:37:46.851 bw ( KiB/s): min=26880, max=47360, per=32.57%, avg=35993.60, stdev=6159.62, samples=10 00:37:46.851 iops : min= 210, max= 370, avg=281.20, stdev=48.12, samples=10 00:37:46.851 lat (msec) : 4=1.06%, 10=87.51%, 20=3.19%, 50=6.60%, 100=1.63% 00:37:46.851 cpu : usr=95.66%, sys=4.08%, ctx=8, majf=0, minf=83 00:37:46.851 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:46.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.851 issued rwts: total=1409,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.851 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:46.851 filename0: (groupid=0, jobs=1): err= 0: pid=2279223: Thu Nov 28 08:36:42 2024 00:37:46.851 read: IOPS=349, BW=43.7MiB/s (45.8MB/s)(219MiB/5014msec) 00:37:46.851 slat (nsec): min=5556, max=32005, avg=8412.66, stdev=2253.51 00:37:46.851 clat (usec): min=3514, max=88026, avg=8569.84, stdev=8062.93 00:37:46.851 lat (usec): min=3523, max=88032, avg=8578.25, stdev=8062.80 00:37:46.851 clat percentiles (usec): 00:37:46.851 | 1.00th=[ 3818], 5.00th=[ 4146], 
10.00th=[ 4490], 20.00th=[ 5080], 00:37:46.851 | 30.00th=[ 5800], 40.00th=[ 6259], 50.00th=[ 6849], 60.00th=[ 7635], 00:37:46.851 | 70.00th=[ 8455], 80.00th=[ 9241], 90.00th=[10290], 95.00th=[11076], 00:37:46.851 | 99.00th=[47973], 99.50th=[49021], 99.90th=[50070], 99.95th=[87557], 00:37:46.851 | 99.99th=[87557] 00:37:46.851 bw ( KiB/s): min=35328, max=53504, per=40.54%, avg=44800.00, stdev=6764.52, samples=10 00:37:46.851 iops : min= 276, max= 418, avg=350.00, stdev=52.85, samples=10 00:37:46.851 lat (msec) : 4=3.25%, 10=84.26%, 20=8.61%, 50=3.76%, 100=0.11% 00:37:46.851 cpu : usr=91.72%, sys=6.70%, ctx=496, majf=0, minf=81 00:37:46.851 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:46.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.851 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.851 issued rwts: total=1753,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.851 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:46.851 00:37:46.851 Run status group 0 (all jobs): 00:37:46.851 READ: bw=108MiB/s (113MB/s), 29.5MiB/s-43.7MiB/s (30.9MB/s-45.8MB/s), io=544MiB (570MB), run=5014-5038msec 00:37:46.851 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:46.851 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:46.851 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:46.851 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:46.851 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:46.851 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:46.851 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.851 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.851 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.851 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:46.851 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.851 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.851 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.851 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:46.851 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:46.851 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:46.851 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:46.851 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:46.851 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:46.851 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:46.851 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:46.851 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.852 bdev_null0 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.852 [2024-11-28 08:36:43.172203] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.852 bdev_null1 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.852 08:36:43 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.852 bdev_null2 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
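Stripped of the xtrace noise, the create_subsystems 0 1 2 sequence traced above amounts to four RPCs per subsystem. A sketch using scripts/rpc.py directly (rpc_cmd in the trace is the harness wrapper around it), with the sizes, DIF type, address, and port taken verbatim from the traced commands:

    rpc=scripts/rpc.py
    for sub in 0 1 2; do
        # 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata, DIF type 2
        $rpc bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
        # Expose it as an NVMe-oF subsystem with one namespace and a TCP listener
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
            --serial-number "53313233-$sub" --allow-any-host
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
            -t tcp -a 10.0.0.2 -s 4420
    done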
00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:46.852 { 00:37:46.852 "params": { 00:37:46.852 "name": "Nvme$subsystem", 00:37:46.852 "trtype": "$TEST_TRANSPORT", 00:37:46.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:46.852 "adrfam": "ipv4", 00:37:46.852 "trsvcid": "$NVMF_PORT", 00:37:46.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:46.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:46.852 "hdgst": ${hdgst:-false}, 00:37:46.852 "ddgst": ${ddgst:-false} 00:37:46.852 }, 00:37:46.852 "method": "bdev_nvme_attach_controller" 00:37:46.852 } 00:37:46.852 EOF 00:37:46.852 )") 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:46.852 { 00:37:46.852 "params": { 00:37:46.852 "name": "Nvme$subsystem", 00:37:46.852 "trtype": "$TEST_TRANSPORT", 00:37:46.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:46.852 "adrfam": "ipv4", 00:37:46.852 "trsvcid": "$NVMF_PORT", 00:37:46.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:46.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:46.852 "hdgst": ${hdgst:-false}, 00:37:46.852 "ddgst": ${ddgst:-false} 00:37:46.852 }, 00:37:46.852 "method": "bdev_nvme_attach_controller" 00:37:46.852 } 00:37:46.852 EOF 00:37:46.852 )") 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
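The config+=(heredoc) fragments being traced here (and once more below for the third subsystem) are how gen_nvmf_target_json builds the JSON that bdev_nvme_attach_controller consumes. A condensed sketch of the pattern, assuming TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, and NVMF_PORT come from the test environment (tcp, 10.0.0.2, and 4420 in this run); the exact plumbing around jq differs slightly in the harness:

    config=()
    for subsystem in 0 1 2; do
        config+=("$(cat <<-EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "$TEST_TRANSPORT",
        "traddr": "$NVMF_FIRST_TARGET_IP",
        "adrfam": "ipv4",
        "trsvcid": "$NVMF_PORT",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": ${hdgst:-false},
        "ddgst": ${ddgst:-false}
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
        )")
    done
    # Join the fragments with commas in a subshell (so IFS is not clobbered),
    # then pretty-print; this yields the '{...},{...},{...}' blob printed below.
    (
        IFS=","
        printf '%s\n' "${config[*]}"
    ) | jq .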
00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:46.852 08:36:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:46.852 { 00:37:46.852 "params": { 00:37:46.852 "name": "Nvme$subsystem", 00:37:46.852 "trtype": "$TEST_TRANSPORT", 00:37:46.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:46.852 "adrfam": "ipv4", 00:37:46.852 "trsvcid": "$NVMF_PORT", 00:37:46.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:46.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:46.852 "hdgst": ${hdgst:-false}, 00:37:46.852 "ddgst": ${ddgst:-false} 00:37:46.852 }, 00:37:46.852 "method": "bdev_nvme_attach_controller" 00:37:46.852 } 00:37:46.852 EOF 00:37:46.852 )") 00:37:46.853 08:36:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:46.853 08:36:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:46.853 08:36:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:46.853 08:36:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:46.853 "params": { 00:37:46.853 "name": "Nvme0", 00:37:46.853 "trtype": "tcp", 00:37:46.853 "traddr": "10.0.0.2", 00:37:46.853 "adrfam": "ipv4", 00:37:46.853 "trsvcid": "4420", 00:37:46.853 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:46.853 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:46.853 "hdgst": false, 00:37:46.853 "ddgst": false 00:37:46.853 }, 00:37:46.853 "method": "bdev_nvme_attach_controller" 00:37:46.853 },{ 00:37:46.853 "params": { 00:37:46.853 "name": "Nvme1", 00:37:46.853 "trtype": "tcp", 00:37:46.853 "traddr": "10.0.0.2", 00:37:46.853 "adrfam": "ipv4", 00:37:46.853 "trsvcid": "4420", 00:37:46.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:46.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:46.853 "hdgst": false, 00:37:46.853 "ddgst": false 00:37:46.853 }, 00:37:46.853 "method": "bdev_nvme_attach_controller" 00:37:46.853 },{ 00:37:46.853 "params": { 00:37:46.853 "name": "Nvme2", 00:37:46.853 "trtype": "tcp", 00:37:46.853 "traddr": "10.0.0.2", 00:37:46.853 "adrfam": "ipv4", 00:37:46.853 "trsvcid": "4420", 00:37:46.853 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:46.853 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:46.853 "hdgst": false, 00:37:46.853 "ddgst": false 00:37:46.853 }, 00:37:46.853 "method": "bdev_nvme_attach_controller" 00:37:46.853 }' 00:37:46.853 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:46.853 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:46.853 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:46.853 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:46.853 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:46.853 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:46.853 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # 
asan_lib= 00:37:46.853 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:46.853 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:46.853 08:36:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:46.853 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:46.853 ... 00:37:46.853 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:46.853 ... 00:37:46.853 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:46.853 ... 00:37:46.853 fio-3.35 00:37:46.853 Starting 24 threads 00:37:59.098 00:37:59.098 filename0: (groupid=0, jobs=1): err= 0: pid=2280716: Thu Nov 28 08:36:54 2024 00:37:59.098 read: IOPS=680, BW=2720KiB/s (2786kB/s)(26.6MiB/10016msec) 00:37:59.098 slat (nsec): min=5698, max=99063, avg=12353.03, stdev=11160.21 00:37:59.098 clat (usec): min=9152, max=43941, avg=23440.48, stdev=4271.34 00:37:59.098 lat (usec): min=9158, max=43947, avg=23452.83, stdev=4272.91 00:37:59.098 clat percentiles (usec): 00:37:59.098 | 1.00th=[12387], 5.00th=[16057], 10.00th=[17171], 20.00th=[20317], 00:37:59.098 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:59.098 | 70.00th=[24249], 80.00th=[24511], 90.00th=[27132], 95.00th=[30802], 00:37:59.098 | 99.00th=[37487], 99.50th=[39584], 99.90th=[41157], 99.95th=[43779], 00:37:59.098 | 99.99th=[43779] 00:37:59.098 bw ( KiB/s): min= 2475, max= 3008, per=4.29%, avg=2723.32, stdev=130.83, samples=19 00:37:59.098 iops : min= 618, max= 752, avg=680.74, stdev=32.80, samples=19 00:37:59.098 lat (msec) : 10=0.23%, 20=17.15%, 50=82.62% 00:37:59.098 cpu : usr=98.66%, sys=0.96%, ctx=78, majf=0, minf=10 00:37:59.098 IO depths : 1=1.2%, 2=2.4%, 4=9.0%, 8=74.2%, 16=13.1%, 32=0.0%, >=64=0.0% 00:37:59.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.098 complete : 0=0.0%, 4=90.3%, 8=5.8%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.098 issued rwts: total=6812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.098 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:59.098 filename0: (groupid=0, jobs=1): err= 0: pid=2280717: Thu Nov 28 08:36:54 2024 00:37:59.098 read: IOPS=643, BW=2573KiB/s (2634kB/s)(25.1MiB/10001msec) 00:37:59.098 slat (nsec): min=5690, max=91766, avg=24413.55, stdev=15360.39 00:37:59.098 clat (usec): min=10382, max=54709, avg=24651.29, stdev=2570.84 00:37:59.098 lat (usec): min=10389, max=54727, avg=24675.71, stdev=2569.75 00:37:59.098 clat percentiles (usec): 00:37:59.098 | 1.00th=[22938], 5.00th=[23725], 10.00th=[23725], 20.00th=[23725], 00:37:59.098 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:59.098 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25297], 95.00th=[30802], 00:37:59.098 | 99.00th=[33162], 99.50th=[34341], 99.90th=[54789], 99.95th=[54789], 00:37:59.098 | 99.99th=[54789] 00:37:59.098 bw ( KiB/s): min= 2299, max= 2688, per=4.04%, avg=2565.21, stdev=100.25, samples=19 00:37:59.098 iops : min= 574, max= 672, avg=641.16, stdev=25.15, samples=19 00:37:59.098 lat (msec) : 20=0.50%, 50=99.25%, 100=0.25% 00:37:59.098 cpu : usr=98.91%, sys=0.78%, ctx=67, majf=0, minf=9 00:37:59.098 IO depths : 
1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:59.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.098 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.098 issued rwts: total=6432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.098 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:59.098 filename0: (groupid=0, jobs=1): err= 0: pid=2280718: Thu Nov 28 08:36:54 2024 00:37:59.098 read: IOPS=665, BW=2662KiB/s (2726kB/s)(26.0MiB/10006msec) 00:37:59.098 slat (nsec): min=5695, max=99395, avg=21639.45, stdev=15740.61 00:37:59.098 clat (usec): min=11252, max=41497, avg=23853.70, stdev=3293.72 00:37:59.098 lat (usec): min=11260, max=41503, avg=23875.34, stdev=3295.16 00:37:59.098 clat percentiles (usec): 00:37:59.098 | 1.00th=[14877], 5.00th=[16581], 10.00th=[20317], 20.00th=[23725], 00:37:59.098 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:59.098 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[29754], 00:37:59.098 | 99.00th=[35914], 99.50th=[37487], 99.90th=[38011], 99.95th=[41681], 00:37:59.098 | 99.99th=[41681] 00:37:59.098 bw ( KiB/s): min= 2560, max= 2864, per=4.19%, avg=2660.84, stdev=78.44, samples=19 00:37:59.098 iops : min= 640, max= 716, avg=665.05, stdev=19.55, samples=19 00:37:59.098 lat (msec) : 20=9.23%, 50=90.77% 00:37:59.098 cpu : usr=98.65%, sys=0.94%, ctx=51, majf=0, minf=9 00:37:59.098 IO depths : 1=2.6%, 2=6.8%, 4=19.5%, 8=60.9%, 16=10.2%, 32=0.0%, >=64=0.0% 00:37:59.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.098 complete : 0=0.0%, 4=93.0%, 8=1.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.098 issued rwts: total=6660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.098 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:59.098 filename0: (groupid=0, jobs=1): err= 0: pid=2280719: Thu Nov 28 08:36:54 2024 00:37:59.098 read: IOPS=660, BW=2641KiB/s (2704kB/s)(25.8MiB/10010msec) 00:37:59.098 slat (usec): min=5, max=107, avg=26.01, stdev=16.74 00:37:59.098 clat (usec): min=11701, max=25828, avg=24003.17, stdev=1099.45 00:37:59.098 lat (usec): min=11712, max=25860, avg=24029.19, stdev=1098.65 00:37:59.098 clat percentiles (usec): 00:37:59.098 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:37:59.098 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:59.098 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:59.098 | 99.00th=[25297], 99.50th=[25560], 99.90th=[25560], 99.95th=[25822], 00:37:59.098 | 99.99th=[25822] 00:37:59.098 bw ( KiB/s): min= 2554, max= 2816, per=4.15%, avg=2639.89, stdev=76.98, samples=19 00:37:59.098 iops : min= 638, max= 704, avg=659.89, stdev=19.29, samples=19 00:37:59.098 lat (msec) : 20=0.97%, 50=99.03% 00:37:59.098 cpu : usr=98.93%, sys=0.75%, ctx=36, majf=0, minf=9 00:37:59.098 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:59.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.098 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.098 issued rwts: total=6608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.098 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:59.098 filename0: (groupid=0, jobs=1): err= 0: pid=2280720: Thu Nov 28 08:36:54 2024 00:37:59.098 read: IOPS=658, BW=2636KiB/s (2699kB/s)(25.8MiB/10004msec) 00:37:59.098 slat (nsec): min=5702, max=68527, avg=12597.78, 
stdev=9173.63 00:37:59.098 clat (usec): min=14187, max=30553, avg=24177.51, stdev=875.31 00:37:59.098 lat (usec): min=14196, max=30560, avg=24190.11, stdev=874.55 00:37:59.098 clat percentiles (usec): 00:37:59.098 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:37:59.098 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:59.098 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:59.099 | 99.00th=[25822], 99.50th=[25822], 99.90th=[30278], 99.95th=[30540], 00:37:59.099 | 99.99th=[30540] 00:37:59.099 bw ( KiB/s): min= 2554, max= 2688, per=4.15%, avg=2633.42, stdev=65.15, samples=19 00:37:59.099 iops : min= 638, max= 672, avg=658.26, stdev=16.35, samples=19 00:37:59.099 lat (msec) : 20=0.88%, 50=99.12% 00:37:59.099 cpu : usr=98.51%, sys=0.94%, ctx=156, majf=0, minf=9 00:37:59.099 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:59.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.099 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.099 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.099 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:59.099 filename0: (groupid=0, jobs=1): err= 0: pid=2280721: Thu Nov 28 08:36:54 2024 00:37:59.099 read: IOPS=658, BW=2634KiB/s (2697kB/s)(25.8MiB/10010msec) 00:37:59.099 slat (nsec): min=5778, max=94593, avg=25980.13, stdev=15188.22 00:37:59.099 clat (usec): min=11373, max=30856, avg=24070.45, stdev=945.58 00:37:59.099 lat (usec): min=11380, max=30874, avg=24096.43, stdev=945.73 00:37:59.099 clat percentiles (usec): 00:37:59.099 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:37:59.099 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:59.099 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:59.099 | 99.00th=[25560], 99.50th=[25560], 99.90th=[30802], 99.95th=[30802], 00:37:59.099 | 99.99th=[30802] 00:37:59.099 bw ( KiB/s): min= 2560, max= 2693, per=4.13%, avg=2627.00, stdev=65.34, samples=19 00:37:59.099 iops : min= 640, max= 673, avg=656.68, stdev=16.28, samples=19 00:37:59.099 lat (msec) : 20=0.49%, 50=99.51% 00:37:59.099 cpu : usr=98.38%, sys=1.11%, ctx=133, majf=0, minf=9 00:37:59.099 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:59.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.099 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.099 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.099 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:59.099 filename0: (groupid=0, jobs=1): err= 0: pid=2280722: Thu Nov 28 08:36:54 2024 00:37:59.099 read: IOPS=653, BW=2614KiB/s (2677kB/s)(25.5MiB/10008msec) 00:37:59.099 slat (nsec): min=5700, max=96130, avg=23518.71, stdev=16323.75 00:37:59.099 clat (usec): min=12903, max=38658, avg=24270.93, stdev=2122.40 00:37:59.099 lat (usec): min=12912, max=38664, avg=24294.45, stdev=2122.22 00:37:59.099 clat percentiles (usec): 00:37:59.099 | 1.00th=[16581], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:37:59.099 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:59.099 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[25822], 00:37:59.099 | 99.00th=[32113], 99.50th=[32637], 99.90th=[38536], 99.95th=[38536], 00:37:59.099 | 99.99th=[38536] 00:37:59.099 bw ( KiB/s): min= 2432, 
max= 2784, per=4.11%, avg=2610.95, stdev=84.15, samples=19 00:37:59.099 iops : min= 608, max= 696, avg=652.63, stdev=20.99, samples=19 00:37:59.099 lat (msec) : 20=2.60%, 50=97.40% 00:37:59.099 cpu : usr=97.44%, sys=1.51%, ctx=936, majf=0, minf=9 00:37:59.099 IO depths : 1=5.9%, 2=12.0%, 4=24.5%, 8=50.9%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:59.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.099 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.099 issued rwts: total=6540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.099 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:59.099 filename0: (groupid=0, jobs=1): err= 0: pid=2280723: Thu Nov 28 08:36:54 2024 00:37:59.099 read: IOPS=660, BW=2643KiB/s (2706kB/s)(25.8MiB/10011msec) 00:37:59.099 slat (nsec): min=5707, max=84984, avg=12378.27, stdev=9740.49 00:37:59.099 clat (usec): min=11698, max=33173, avg=24114.96, stdev=1398.43 00:37:59.099 lat (usec): min=11738, max=33179, avg=24127.34, stdev=1397.60 00:37:59.099 clat percentiles (usec): 00:37:59.099 | 1.00th=[16450], 5.00th=[23725], 10.00th=[23725], 20.00th=[23987], 00:37:59.099 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:59.099 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:59.099 | 99.00th=[25560], 99.50th=[30016], 99.90th=[31851], 99.95th=[32113], 00:37:59.099 | 99.99th=[33162] 00:37:59.099 bw ( KiB/s): min= 2554, max= 2864, per=4.16%, avg=2642.42, stdev=92.78, samples=19 00:37:59.099 iops : min= 638, max= 716, avg=660.53, stdev=23.24, samples=19 00:37:59.099 lat (msec) : 20=1.83%, 50=98.17% 00:37:59.099 cpu : usr=98.58%, sys=0.96%, ctx=129, majf=0, minf=9 00:37:59.099 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:59.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.099 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.099 issued rwts: total=6614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.099 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:59.099 filename1: (groupid=0, jobs=1): err= 0: pid=2280724: Thu Nov 28 08:36:54 2024 00:37:59.099 read: IOPS=662, BW=2651KiB/s (2715kB/s)(25.9MiB/10017msec) 00:37:59.099 slat (usec): min=5, max=100, avg=27.01, stdev=16.09 00:37:59.099 clat (usec): min=4046, max=33607, avg=23898.40, stdev=1936.18 00:37:59.099 lat (usec): min=4060, max=33614, avg=23925.42, stdev=1936.24 00:37:59.099 clat percentiles (usec): 00:37:59.099 | 1.00th=[13435], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:37:59.099 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:59.099 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:59.099 | 99.00th=[25560], 99.50th=[28967], 99.90th=[33424], 99.95th=[33424], 00:37:59.099 | 99.99th=[33817] 00:37:59.099 bw ( KiB/s): min= 2554, max= 3072, per=4.18%, avg=2653.37, stdev=119.58, samples=19 00:37:59.099 iops : min= 638, max= 768, avg=663.26, stdev=29.91, samples=19 00:37:59.099 lat (msec) : 10=0.48%, 20=1.55%, 50=97.97% 00:37:59.099 cpu : usr=98.40%, sys=1.08%, ctx=201, majf=0, minf=9 00:37:59.099 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:37:59.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.099 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.099 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.099 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:37:59.099 filename1: (groupid=0, jobs=1): err= 0: pid=2280725: Thu Nov 28 08:36:54 2024 00:37:59.099 read: IOPS=657, BW=2632KiB/s (2695kB/s)(25.7MiB/10010msec) 00:37:59.099 slat (nsec): min=5711, max=63650, avg=15226.08, stdev=8720.47 00:37:59.099 clat (usec): min=11511, max=37689, avg=24181.43, stdev=1197.70 00:37:59.099 lat (usec): min=11517, max=37711, avg=24196.65, stdev=1197.63 00:37:59.099 clat percentiles (usec): 00:37:59.099 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:37:59.099 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:59.099 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:59.099 | 99.00th=[25822], 99.50th=[26084], 99.90th=[37487], 99.95th=[37487], 00:37:59.099 | 99.99th=[37487] 00:37:59.099 bw ( KiB/s): min= 2554, max= 2688, per=4.13%, avg=2625.47, stdev=65.83, samples=19 00:37:59.099 iops : min= 638, max= 672, avg=656.21, stdev=16.48, samples=19 00:37:59.099 lat (msec) : 20=0.76%, 50=99.24% 00:37:59.099 cpu : usr=99.03%, sys=0.68%, ctx=62, majf=0, minf=9 00:37:59.099 IO depths : 1=5.9%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:59.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.099 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.099 issued rwts: total=6586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.099 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:59.099 filename1: (groupid=0, jobs=1): err= 0: pid=2280726: Thu Nov 28 08:36:54 2024 00:37:59.099 read: IOPS=667, BW=2669KiB/s (2733kB/s)(26.1MiB/10022msec) 00:37:59.099 slat (nsec): min=5718, max=91129, avg=10679.05, stdev=8466.66 00:37:59.099 clat (usec): min=3834, max=26001, avg=23887.56, stdev=2121.27 00:37:59.099 lat (usec): min=3850, max=26007, avg=23898.24, stdev=2120.32 00:37:59.099 clat percentiles (usec): 00:37:59.099 | 1.00th=[13566], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:37:59.099 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:59.099 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:59.099 | 99.00th=[25560], 99.50th=[25822], 99.90th=[26084], 99.95th=[26084], 00:37:59.099 | 99.99th=[26084] 00:37:59.099 bw ( KiB/s): min= 2554, max= 3072, per=4.20%, avg=2667.60, stdev=119.62, samples=20 00:37:59.099 iops : min= 638, max= 768, avg=666.80, stdev=29.92, samples=20 00:37:59.099 lat (msec) : 4=0.04%, 10=0.43%, 20=2.87%, 50=96.65% 00:37:59.099 cpu : usr=99.23%, sys=0.50%, ctx=13, majf=0, minf=9 00:37:59.099 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:59.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.099 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.099 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.099 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:59.099 filename1: (groupid=0, jobs=1): err= 0: pid=2280727: Thu Nov 28 08:36:54 2024 00:37:59.099 read: IOPS=658, BW=2632KiB/s (2695kB/s)(25.7MiB/10009msec) 00:37:59.099 slat (nsec): min=5714, max=75592, avg=17908.20, stdev=10972.90 00:37:59.099 clat (usec): min=11496, max=37240, avg=24141.47, stdev=1547.59 00:37:59.099 lat (usec): min=11505, max=37248, avg=24159.38, stdev=1547.50 00:37:59.099 clat percentiles (usec): 00:37:59.099 | 1.00th=[16319], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:37:59.099 | 
30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:59.099 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:59.099 | 99.00th=[30278], 99.50th=[32900], 99.90th=[36963], 99.95th=[36963], 00:37:59.099 | 99.99th=[37487] 00:37:59.099 bw ( KiB/s): min= 2554, max= 2688, per=4.13%, avg=2625.74, stdev=62.36, samples=19 00:37:59.099 iops : min= 638, max= 672, avg=656.26, stdev=15.62, samples=19 00:37:59.099 lat (msec) : 20=1.56%, 50=98.44% 00:37:59.099 cpu : usr=98.96%, sys=0.77%, ctx=5, majf=0, minf=9 00:37:59.099 IO depths : 1=5.9%, 2=11.9%, 4=24.4%, 8=51.2%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:59.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.099 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.099 issued rwts: total=6586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.099 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:59.099 filename1: (groupid=0, jobs=1): err= 0: pid=2280728: Thu Nov 28 08:36:54 2024 00:37:59.099 read: IOPS=663, BW=2654KiB/s (2718kB/s)(25.9MiB/10002msec) 00:37:59.099 slat (nsec): min=5648, max=68658, avg=15850.90, stdev=11068.86 00:37:59.099 clat (usec): min=3834, max=43749, avg=24004.91, stdev=3155.71 00:37:59.099 lat (usec): min=3840, max=43769, avg=24020.76, stdev=3156.33 00:37:59.099 clat percentiles (usec): 00:37:59.099 | 1.00th=[13829], 5.00th=[17695], 10.00th=[22938], 20.00th=[23725], 00:37:59.099 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:59.099 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[27395], 00:37:59.099 | 99.00th=[34341], 99.50th=[35914], 99.90th=[43779], 99.95th=[43779], 00:37:59.099 | 99.99th=[43779] 00:37:59.099 bw ( KiB/s): min= 2436, max= 2800, per=4.16%, avg=2640.32, stdev=85.77, samples=19 00:37:59.099 iops : min= 609, max= 700, avg=659.95, stdev=21.43, samples=19 00:37:59.099 lat (msec) : 4=0.15%, 10=0.12%, 20=6.69%, 50=93.04% 00:37:59.099 cpu : usr=98.93%, sys=0.79%, ctx=21, majf=0, minf=9 00:37:59.099 IO depths : 1=2.3%, 2=5.6%, 4=13.9%, 8=65.9%, 16=12.2%, 32=0.0%, >=64=0.0% 00:37:59.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.100 complete : 0=0.0%, 4=91.8%, 8=4.5%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.100 issued rwts: total=6636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.100 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:59.100 filename1: (groupid=0, jobs=1): err= 0: pid=2280729: Thu Nov 28 08:36:54 2024 00:37:59.100 read: IOPS=658, BW=2635KiB/s (2698kB/s)(25.8MiB/10008msec) 00:37:59.100 slat (usec): min=5, max=100, avg=24.18, stdev=17.86 00:37:59.100 clat (usec): min=13202, max=26281, avg=24089.19, stdev=839.90 00:37:59.100 lat (usec): min=13235, max=26313, avg=24113.37, stdev=837.59 00:37:59.100 clat percentiles (usec): 00:37:59.100 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:37:59.100 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:59.100 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:59.100 | 99.00th=[25560], 99.50th=[25560], 99.90th=[26084], 99.95th=[26346], 00:37:59.100 | 99.99th=[26346] 00:37:59.100 bw ( KiB/s): min= 2554, max= 2688, per=4.15%, avg=2633.11, stdev=65.54, samples=19 00:37:59.100 iops : min= 638, max= 672, avg=658.16, stdev=16.48, samples=19 00:37:59.100 lat (msec) : 20=0.49%, 50=99.51% 00:37:59.100 cpu : usr=98.49%, sys=1.00%, ctx=123, majf=0, minf=9 00:37:59.100 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 
8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:37:59.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.100 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.100 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.100 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:59.100 filename1: (groupid=0, jobs=1): err= 0: pid=2280730: Thu Nov 28 08:36:54 2024 00:37:59.100 read: IOPS=659, BW=2639KiB/s (2702kB/s)(25.8MiB/10002msec) 00:37:59.100 slat (usec): min=5, max=103, avg=17.27, stdev=15.97 00:37:59.100 clat (usec): min=3818, max=65882, avg=24173.58, stdev=3572.57 00:37:59.100 lat (usec): min=3824, max=65903, avg=24190.84, stdev=3573.32 00:37:59.100 clat percentiles (usec): 00:37:59.100 | 1.00th=[17171], 5.00th=[19006], 10.00th=[19792], 20.00th=[22676], 00:37:59.100 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:59.100 | 70.00th=[24511], 80.00th=[25297], 90.00th=[28181], 95.00th=[29754], 00:37:59.100 | 99.00th=[34866], 99.50th=[36963], 99.90th=[54264], 99.95th=[54264], 00:37:59.100 | 99.99th=[65799] 00:37:59.100 bw ( KiB/s): min= 2324, max= 2720, per=4.14%, avg=2631.89, stdev=89.87, samples=19 00:37:59.100 iops : min= 581, max= 680, avg=657.84, stdev=22.43, samples=19 00:37:59.100 lat (msec) : 4=0.11%, 10=0.20%, 20=11.50%, 50=87.95%, 100=0.24% 00:37:59.100 cpu : usr=98.71%, sys=0.86%, ctx=57, majf=0, minf=9 00:37:59.100 IO depths : 1=0.1%, 2=0.1%, 4=3.9%, 8=79.8%, 16=16.3%, 32=0.0%, >=64=0.0% 00:37:59.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.100 complete : 0=0.0%, 4=89.4%, 8=8.6%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.100 issued rwts: total=6599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.100 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:59.100 filename1: (groupid=0, jobs=1): err= 0: pid=2280731: Thu Nov 28 08:36:54 2024 00:37:59.100 read: IOPS=659, BW=2636KiB/s (2700kB/s)(25.8MiB/10002msec) 00:37:59.100 slat (nsec): min=5621, max=65988, avg=15490.74, stdev=10000.65 00:37:59.100 clat (usec): min=3840, max=43173, avg=24123.43, stdev=1651.02 00:37:59.100 lat (usec): min=3846, max=43193, avg=24138.92, stdev=1651.26 00:37:59.100 clat percentiles (usec): 00:37:59.100 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:37:59.100 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:59.100 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:59.100 | 99.00th=[25560], 99.50th=[25822], 99.90th=[43254], 99.95th=[43254], 00:37:59.100 | 99.99th=[43254] 00:37:59.100 bw ( KiB/s): min= 2436, max= 2688, per=4.13%, avg=2626.00, stdev=77.61, samples=19 00:37:59.100 iops : min= 609, max= 672, avg=656.37, stdev=19.39, samples=19 00:37:59.100 lat (msec) : 4=0.03%, 10=0.21%, 20=0.49%, 50=99.27% 00:37:59.100 cpu : usr=98.76%, sys=0.83%, ctx=77, majf=0, minf=9 00:37:59.100 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:59.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.100 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.100 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.100 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:59.100 filename2: (groupid=0, jobs=1): err= 0: pid=2280732: Thu Nov 28 08:36:54 2024 00:37:59.100 read: IOPS=702, BW=2810KiB/s (2877kB/s)(27.5MiB/10022msec) 00:37:59.100 slat (nsec): min=5704, max=77463, 
avg=7469.25, stdev=3819.35 00:37:59.100 clat (usec): min=4194, max=25384, avg=22711.47, stdev=3345.31 00:37:59.100 lat (usec): min=4211, max=25390, avg=22718.93, stdev=3344.90 00:37:59.100 clat percentiles (usec): 00:37:59.100 | 1.00th=[12125], 5.00th=[15664], 10.00th=[16450], 20.00th=[23462], 00:37:59.100 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:59.100 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:37:59.100 | 99.00th=[25297], 99.50th=[25297], 99.90th=[25297], 99.95th=[25297], 00:37:59.100 | 99.99th=[25297] 00:37:59.100 bw ( KiB/s): min= 2554, max= 3200, per=4.42%, avg=2808.40, stdev=158.22, samples=20 00:37:59.100 iops : min= 638, max= 800, avg=702.00, stdev=39.58, samples=20 00:37:59.100 lat (msec) : 10=0.45%, 20=17.73%, 50=81.82% 00:37:59.100 cpu : usr=98.77%, sys=0.81%, ctx=38, majf=0, minf=9 00:37:59.100 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:59.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.100 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.100 issued rwts: total=7040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.100 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:59.100 filename2: (groupid=0, jobs=1): err= 0: pid=2280733: Thu Nov 28 08:36:54 2024 00:37:59.100 read: IOPS=658, BW=2635KiB/s (2698kB/s)(25.8MiB/10008msec) 00:37:59.100 slat (nsec): min=5720, max=54170, avg=11104.03, stdev=7729.09 00:37:59.100 clat (usec): min=13780, max=32074, avg=24196.44, stdev=768.10 00:37:59.100 lat (usec): min=13811, max=32082, avg=24207.54, stdev=767.43 00:37:59.100 clat percentiles (usec): 00:37:59.100 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:37:59.100 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24249], 00:37:59.100 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:59.100 | 99.00th=[25560], 99.50th=[25822], 99.90th=[26084], 99.95th=[26084], 00:37:59.100 | 99.99th=[32113] 00:37:59.100 bw ( KiB/s): min= 2554, max= 2688, per=4.15%, avg=2633.95, stdev=64.54, samples=19 00:37:59.100 iops : min= 638, max= 672, avg=658.37, stdev=16.23, samples=19 00:37:59.100 lat (msec) : 20=0.52%, 50=99.48% 00:37:59.100 cpu : usr=98.92%, sys=0.69%, ctx=38, majf=0, minf=9 00:37:59.100 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:59.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.100 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.100 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.100 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:59.100 filename2: (groupid=0, jobs=1): err= 0: pid=2280734: Thu Nov 28 08:36:54 2024 00:37:59.100 read: IOPS=659, BW=2636KiB/s (2699kB/s)(25.8MiB/10003msec) 00:37:59.100 slat (nsec): min=5715, max=53634, avg=12253.73, stdev=7434.69 00:37:59.100 clat (usec): min=3653, max=54425, avg=24174.21, stdev=1745.69 00:37:59.100 lat (usec): min=3659, max=54448, avg=24186.46, stdev=1745.58 00:37:59.100 clat percentiles (usec): 00:37:59.100 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:37:59.100 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:59.100 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:59.100 | 99.00th=[25822], 99.50th=[26084], 99.90th=[43779], 99.95th=[43779], 00:37:59.100 | 99.99th=[54264] 00:37:59.100 bw ( 
KiB/s): min= 2432, max= 2688, per=4.13%, avg=2625.79, stdev=78.15, samples=19 00:37:59.100 iops : min= 608, max= 672, avg=656.32, stdev=19.53, samples=19 00:37:59.100 lat (msec) : 4=0.03%, 10=0.21%, 20=0.58%, 50=99.15%, 100=0.03% 00:37:59.100 cpu : usr=98.52%, sys=1.00%, ctx=203, majf=0, minf=9 00:37:59.100 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:59.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.100 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.100 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.100 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:59.100 filename2: (groupid=0, jobs=1): err= 0: pid=2280735: Thu Nov 28 08:36:54 2024 00:37:59.100 read: IOPS=665, BW=2661KiB/s (2724kB/s)(26.0MiB/10010msec) 00:37:59.100 slat (usec): min=5, max=101, avg=24.37, stdev=17.58 00:37:59.100 clat (usec): min=11655, max=39542, avg=23853.83, stdev=2344.00 00:37:59.100 lat (usec): min=11667, max=39550, avg=23878.21, stdev=2345.32 00:37:59.100 clat percentiles (usec): 00:37:59.100 | 1.00th=[14615], 5.00th=[19006], 10.00th=[23462], 20.00th=[23725], 00:37:59.100 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:59.100 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:59.100 | 99.00th=[31327], 99.50th=[33424], 99.90th=[39060], 99.95th=[39060], 00:37:59.100 | 99.99th=[39584] 00:37:59.100 bw ( KiB/s): min= 2554, max= 2816, per=4.17%, avg=2649.16, stdev=76.80, samples=19 00:37:59.100 iops : min= 638, max= 704, avg=662.21, stdev=19.25, samples=19 00:37:59.100 lat (msec) : 20=5.41%, 50=94.59% 00:37:59.100 cpu : usr=98.87%, sys=0.85%, ctx=12, majf=0, minf=9 00:37:59.100 IO depths : 1=3.4%, 2=9.0%, 4=23.3%, 8=55.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:37:59.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.100 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.100 issued rwts: total=6658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.100 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:59.100 filename2: (groupid=0, jobs=1): err= 0: pid=2280736: Thu Nov 28 08:36:54 2024 00:37:59.100 read: IOPS=663, BW=2655KiB/s (2719kB/s)(26.0MiB/10014msec) 00:37:59.100 slat (nsec): min=5724, max=94448, avg=24456.84, stdev=16251.93 00:37:59.100 clat (usec): min=13090, max=36185, avg=23875.11, stdev=1658.05 00:37:59.100 lat (usec): min=13118, max=36207, avg=23899.57, stdev=1659.02 00:37:59.100 clat percentiles (usec): 00:37:59.100 | 1.00th=[14746], 5.00th=[22938], 10.00th=[23462], 20.00th=[23725], 00:37:59.100 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:37:59.100 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:59.101 | 99.00th=[25822], 99.50th=[29230], 99.90th=[35914], 99.95th=[35914], 00:37:59.101 | 99.99th=[36439] 00:37:59.101 bw ( KiB/s): min= 2560, max= 2856, per=4.18%, avg=2656.42, stdev=74.41, samples=19 00:37:59.101 iops : min= 640, max= 714, avg=664.00, stdev=18.58, samples=19 00:37:59.101 lat (msec) : 20=3.50%, 50=96.50% 00:37:59.101 cpu : usr=98.60%, sys=0.95%, ctx=163, majf=0, minf=9 00:37:59.101 IO depths : 1=5.9%, 2=11.9%, 4=24.1%, 8=51.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:37:59.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.101 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.101 issued rwts: total=6648,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:37:59.101 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:59.101 filename2: (groupid=0, jobs=1): err= 0: pid=2280737: Thu Nov 28 08:36:54 2024 00:37:59.101 read: IOPS=657, BW=2630KiB/s (2693kB/s)(25.7MiB/10003msec) 00:37:59.101 slat (nsec): min=5719, max=74376, avg=17532.04, stdev=11599.62 00:37:59.101 clat (usec): min=13166, max=42602, avg=24183.52, stdev=1210.68 00:37:59.101 lat (usec): min=13172, max=42619, avg=24201.05, stdev=1210.26 00:37:59.101 clat percentiles (usec): 00:37:59.101 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:37:59.101 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:59.101 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:59.101 | 99.00th=[25822], 99.50th=[25822], 99.90th=[42730], 99.95th=[42730], 00:37:59.101 | 99.99th=[42730] 00:37:59.101 bw ( KiB/s): min= 2432, max= 2688, per=4.13%, avg=2626.11, stdev=78.40, samples=19 00:37:59.101 iops : min= 608, max= 672, avg=656.42, stdev=19.61, samples=19 00:37:59.101 lat (msec) : 20=0.52%, 50=99.48% 00:37:59.101 cpu : usr=99.19%, sys=0.51%, ctx=38, majf=0, minf=9 00:37:59.101 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:59.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.101 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.101 issued rwts: total=6576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.101 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:59.101 filename2: (groupid=0, jobs=1): err= 0: pid=2280738: Thu Nov 28 08:36:54 2024 00:37:59.101 read: IOPS=663, BW=2655KiB/s (2718kB/s)(25.9MiB/10002msec) 00:37:59.101 slat (nsec): min=5700, max=68059, avg=18121.46, stdev=11506.56 00:37:59.101 clat (usec): min=4936, max=53573, avg=23949.25, stdev=2993.44 00:37:59.101 lat (usec): min=4942, max=53593, avg=23967.37, stdev=2994.31 00:37:59.101 clat percentiles (usec): 00:37:59.101 | 1.00th=[14222], 5.00th=[18482], 10.00th=[23200], 20.00th=[23725], 00:37:59.101 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:37:59.101 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[25560], 00:37:59.101 | 99.00th=[34866], 99.50th=[37487], 99.90th=[43254], 99.95th=[43254], 00:37:59.101 | 99.99th=[53740] 00:37:59.101 bw ( KiB/s): min= 2436, max= 2832, per=4.16%, avg=2645.37, stdev=85.66, samples=19 00:37:59.101 iops : min= 609, max= 708, avg=661.21, stdev=21.42, samples=19 00:37:59.101 lat (msec) : 10=0.24%, 20=6.16%, 50=93.57%, 100=0.03% 00:37:59.101 cpu : usr=98.91%, sys=0.76%, ctx=94, majf=0, minf=9 00:37:59.101 IO depths : 1=4.6%, 2=9.7%, 4=21.5%, 8=56.0%, 16=8.2%, 32=0.0%, >=64=0.0% 00:37:59.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.101 complete : 0=0.0%, 4=93.2%, 8=1.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.101 issued rwts: total=6638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.101 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:59.101 filename2: (groupid=0, jobs=1): err= 0: pid=2280739: Thu Nov 28 08:36:54 2024 00:37:59.101 read: IOPS=661, BW=2646KiB/s (2710kB/s)(25.9MiB/10012msec) 00:37:59.101 slat (nsec): min=5700, max=75112, avg=12595.52, stdev=11238.18 00:37:59.101 clat (usec): min=4776, max=29243, avg=24079.86, stdev=1505.25 00:37:59.101 lat (usec): min=4785, max=29249, avg=24092.45, stdev=1503.84 00:37:59.101 clat percentiles (usec): 00:37:59.101 | 1.00th=[14746], 5.00th=[23725], 10.00th=[23725], 
20.00th=[23987], 00:37:59.101 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:37:59.101 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:37:59.101 | 99.00th=[25560], 99.50th=[25560], 99.90th=[25822], 99.95th=[25822], 00:37:59.101 | 99.99th=[29230] 00:37:59.101 bw ( KiB/s): min= 2554, max= 2944, per=4.17%, avg=2646.63, stdev=96.40, samples=19 00:37:59.101 iops : min= 638, max= 736, avg=661.58, stdev=24.14, samples=19 00:37:59.101 lat (msec) : 10=0.24%, 20=1.00%, 50=98.76% 00:37:59.101 cpu : usr=98.98%, sys=0.67%, ctx=61, majf=0, minf=9 00:37:59.101 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:37:59.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.101 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:59.101 issued rwts: total=6624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:59.101 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:59.101 00:37:59.101 Run status group 0 (all jobs): 00:37:59.101 READ: bw=62.0MiB/s (65.0MB/s), 2573KiB/s-2810KiB/s (2634kB/s-2877kB/s), io=622MiB (652MB), run=10001-10022msec 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:59.101 08:36:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:59.101 bdev_null0 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.101 08:36:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:59.101 08:36:55 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.101 08:36:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:59.101 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.101 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:59.101 [2024-11-28 08:36:55.010698] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:59.101 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:59.102 bdev_null1 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:59.102 { 00:37:59.102 "params": { 00:37:59.102 "name": "Nvme$subsystem", 00:37:59.102 "trtype": "$TEST_TRANSPORT", 00:37:59.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:59.102 "adrfam": "ipv4", 00:37:59.102 "trsvcid": "$NVMF_PORT", 00:37:59.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:59.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:59.102 "hdgst": ${hdgst:-false}, 00:37:59.102 "ddgst": ${ddgst:-false} 00:37:59.102 }, 00:37:59.102 "method": "bdev_nvme_attach_controller" 00:37:59.102 } 00:37:59.102 EOF 00:37:59.102 )") 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:59.102 { 00:37:59.102 "params": { 00:37:59.102 "name": "Nvme$subsystem", 00:37:59.102 "trtype": "$TEST_TRANSPORT", 00:37:59.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:59.102 "adrfam": "ipv4", 00:37:59.102 "trsvcid": "$NVMF_PORT", 00:37:59.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:59.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:59.102 "hdgst": ${hdgst:-false}, 00:37:59.102 "ddgst": ${ddgst:-false} 00:37:59.102 }, 00:37:59.102 "method": "bdev_nvme_attach_controller" 00:37:59.102 } 00:37:59.102 EOF 00:37:59.102 )") 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:59.102 "params": { 00:37:59.102 "name": "Nvme0", 00:37:59.102 "trtype": "tcp", 00:37:59.102 "traddr": "10.0.0.2", 00:37:59.102 "adrfam": "ipv4", 00:37:59.102 "trsvcid": "4420", 00:37:59.102 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:59.102 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:59.102 "hdgst": false, 00:37:59.102 "ddgst": false 00:37:59.102 }, 00:37:59.102 "method": "bdev_nvme_attach_controller" 00:37:59.102 },{ 00:37:59.102 "params": { 00:37:59.102 "name": "Nvme1", 00:37:59.102 "trtype": "tcp", 00:37:59.102 "traddr": "10.0.0.2", 00:37:59.102 "adrfam": "ipv4", 00:37:59.102 "trsvcid": "4420", 00:37:59.102 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:59.102 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:59.102 "hdgst": false, 00:37:59.102 "ddgst": false 00:37:59.102 }, 00:37:59.102 "method": "bdev_nvme_attach_controller" 00:37:59.102 }' 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:59.102 08:36:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:59.102 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:59.102 ... 00:37:59.102 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:59.102 ... 
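Note on the harness mechanics traced above: fio_bdev preloads the SPDK fio plugin and runs fio with the spdk_bdev ioengine, handing it a generated SPDK JSON config on /dev/fd/62 and a generated job file on /dev/fd/61, so the initiator side never needs a kernel NVMe device. A minimal standalone sketch of the same pattern follows; the plugin path, addresses, and NQNs are the ones shown in this trace, while the job-file body and the Nvme0n1 bdev name are illustrative rather than the exact output of gen_fio_conf:

# Sketch: drive an NVMe-oF/TCP subsystem from fio via the spdk_bdev ioengine.
PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev

cat > /tmp/bdev.json <<'JSON'
{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_nvme_attach_controller","params":{
    "name":"Nvme0","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4",
    "trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode0",
    "hostnqn":"nqn.2016-06.io.spdk:host0","hdgst":false,"ddgst":false}}]}]}
JSON

cat > /tmp/dif.fio <<'FIO'
[global]
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1
[filename0]
filename=Nvme0n1
FIO

LD_PRELOAD=$PLUGIN fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio

The plugin resolves the job's filename against the bdevs the JSON config attaches, so I/O flows from fio straight to the target over TCP entirely in user space.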
00:37:59.102 fio-3.35
00:37:59.102 Starting 4 threads
00:38:04.391
00:38:04.392 filename0: (groupid=0, jobs=1): err= 0: pid=2283232: Thu Nov 28 08:37:01 2024
00:38:04.392 read: IOPS=3059, BW=23.9MiB/s (25.1MB/s)(120MiB/5002msec)
00:38:04.392 slat (nsec): min=8066, max=61549, avg=8604.97, stdev=1665.06
00:38:04.392 clat (usec): min=897, max=4443, avg=2592.19, stdev=370.14
00:38:04.392 lat (usec): min=921, max=4451, avg=2600.79, stdev=369.88
00:38:04.392 clat percentiles (usec):
00:38:04.392 | 1.00th=[ 1663], 5.00th=[ 2024], 10.00th=[ 2147], 20.00th=[ 2278],
00:38:04.392 | 30.00th=[ 2442], 40.00th=[ 2540], 50.00th=[ 2704], 60.00th=[ 2704],
00:38:04.392 | 70.00th=[ 2737], 80.00th=[ 2737], 90.00th=[ 2933], 95.00th=[ 3392],
00:38:04.392 | 99.00th=[ 3687], 99.50th=[ 3752], 99.90th=[ 4080], 99.95th=[ 4080],
00:38:04.392 | 99.99th=[ 4424]
00:38:04.392 bw ( KiB/s): min=23584, max=26048, per=26.23%, avg=24503.11, stdev=706.36, samples=9
00:38:04.392 iops : min= 2948, max= 3256, avg=3062.89, stdev=88.30, samples=9
00:38:04.392 lat (usec) : 1000=0.01%
00:38:04.392 lat (msec) : 2=2.97%, 4=96.86%, 10=0.16%
00:38:04.392 cpu : usr=96.98%, sys=2.78%, ctx=17, majf=0, minf=24
00:38:04.392 IO depths : 1=0.1%, 2=1.0%, 4=69.1%, 8=29.9%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:04.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:04.392 complete : 0=0.0%, 4=94.5%, 8=5.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:04.392 issued rwts: total=15306,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:04.392 latency : target=0, window=0, percentile=100.00%, depth=8
00:38:04.392 filename0: (groupid=0, jobs=1): err= 0: pid=2283233: Thu Nov 28 08:37:01 2024
00:38:04.392 read: IOPS=2870, BW=22.4MiB/s (23.5MB/s)(112MiB/5001msec)
00:38:04.392 slat (nsec): min=5533, max=94786, avg=6127.38, stdev=2057.10
00:38:04.392 clat (usec): min=1598, max=6365, avg=2770.46, stdev=255.72
00:38:04.392 lat (usec): min=1603, max=6398, avg=2776.59, stdev=255.82
00:38:04.392 clat percentiles (usec):
00:38:04.392 | 1.00th=[ 2212], 5.00th=[ 2474], 10.00th=[ 2573], 20.00th=[ 2671],
00:38:04.392 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2737],
00:38:04.392 | 70.00th=[ 2769], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3130],
00:38:04.392 | 99.00th=[ 3916], 99.50th=[ 4080], 99.90th=[ 4490], 99.95th=[ 6259],
00:38:04.392 | 99.99th=[ 6325]
00:38:04.392 bw ( KiB/s): min=22544, max=23280, per=24.59%, avg=22968.56, stdev=239.29, samples=9
00:38:04.392 iops : min= 2818, max= 2910, avg=2871.00, stdev=29.98, samples=9
00:38:04.392 lat (msec) : 2=0.26%, 4=98.93%, 10=0.81%
00:38:04.392 cpu : usr=97.00%, sys=2.76%, ctx=7, majf=0, minf=63
00:38:04.392 IO depths : 1=0.1%, 2=0.1%, 4=72.0%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:04.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:04.392 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:04.392 issued rwts: total=14355,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:04.392 latency : target=0, window=0, percentile=100.00%, depth=8
00:38:04.392 filename1: (groupid=0, jobs=1): err= 0: pid=2283234: Thu Nov 28 08:37:01 2024
00:38:04.392 read: IOPS=2892, BW=22.6MiB/s (23.7MB/s)(113MiB/5001msec)
00:38:04.392 slat (nsec): min=5531, max=61394, avg=6165.26, stdev=2034.21
00:38:04.392 clat (usec): min=1474, max=5185, avg=2750.74, stdev=233.97
00:38:04.392 lat (usec): min=1480, max=5215, avg=2756.91, stdev=234.04
00:38:04.392 clat percentiles (usec):
00:38:04.392 | 1.00th=[ 2147], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2671],
00:38:04.392 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2737],
00:38:04.392 | 70.00th=[ 2769], 80.00th=[ 2900], 90.00th=[ 2966], 95.00th=[ 3032],
00:38:04.392 | 99.00th=[ 3621], 99.50th=[ 3916], 99.90th=[ 4293], 99.95th=[ 4883],
00:38:04.392 | 99.99th=[ 4883]
00:38:04.392 bw ( KiB/s): min=22736, max=23328, per=24.75%, avg=23120.00, stdev=200.16, samples=9
00:38:04.392 iops : min= 2842, max= 2916, avg=2890.00, stdev=25.02, samples=9
00:38:04.392 lat (msec) : 2=0.46%, 4=99.18%, 10=0.36%
00:38:04.392 cpu : usr=96.24%, sys=3.52%, ctx=13, majf=0, minf=63
00:38:04.392 IO depths : 1=0.1%, 2=0.1%, 4=69.1%, 8=30.8%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:04.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:04.392 complete : 0=0.0%, 4=94.9%, 8=5.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:04.392 issued rwts: total=14463,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:04.392 latency : target=0, window=0, percentile=100.00%, depth=8
00:38:04.392 filename1: (groupid=0, jobs=1): err= 0: pid=2283235: Thu Nov 28 08:37:01 2024
00:38:04.392 read: IOPS=2854, BW=22.3MiB/s (23.4MB/s)(112MiB/5001msec)
00:38:04.392 slat (nsec): min=5531, max=88893, avg=6064.94, stdev=2005.26
00:38:04.392 clat (usec): min=1079, max=6189, avg=2785.44, stdev=262.80
00:38:04.392 lat (usec): min=1086, max=6222, avg=2791.50, stdev=262.93
00:38:04.392 clat percentiles (usec):
00:38:04.392 | 1.00th=[ 2278], 5.00th=[ 2540], 10.00th=[ 2573], 20.00th=[ 2704],
00:38:04.392 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2737], 60.00th=[ 2737],
00:38:04.392 | 70.00th=[ 2769], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3163],
00:38:04.392 | 99.00th=[ 4047], 99.50th=[ 4293], 99.90th=[ 4621], 99.95th=[ 5735],
00:38:04.392 | 99.99th=[ 5800]
00:38:04.392 bw ( KiB/s): min=22160, max=23344, per=24.43%, avg=22821.00, stdev=329.16, samples=9
00:38:04.392 iops : min= 2770, max= 2918, avg=2852.56, stdev=41.20, samples=9
00:38:04.392 lat (msec) : 2=0.20%, 4=98.77%, 10=1.02%
00:38:04.392 cpu : usr=96.90%, sys=2.88%, ctx=6, majf=0, minf=37
00:38:04.392 IO depths : 1=0.1%, 2=0.1%, 4=73.5%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:04.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:04.392 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:04.392 issued rwts: total=14277,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:04.392 latency : target=0, window=0, percentile=100.00%, depth=8
00:38:04.392
00:38:04.392 Run status group 0 (all jobs):
00:38:04.392 READ: bw=91.2MiB/s (95.6MB/s), 22.3MiB/s-23.9MiB/s (23.4MB/s-25.1MB/s), io=456MiB (478MB), run=5001-5002msec
00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1
00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:04.392 08:37:01
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.392 00:38:04.392 real 0m24.649s 00:38:04.392 user 5m19.038s 00:38:04.392 sys 0m4.579s 00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:04.392 08:37:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:04.392 ************************************ 00:38:04.392 END TEST fio_dif_rand_params 00:38:04.392 ************************************ 00:38:04.392 08:37:01 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:38:04.392 08:37:01 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:04.392 08:37:01 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:04.392 08:37:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:04.652 ************************************ 00:38:04.652 START TEST fio_dif_digest 00:38:04.652 ************************************ 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:04.652 bdev_null0 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:38:04.652 [2024-11-28 08:37:01.742440] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:04.652 { 00:38:04.652 "params": { 00:38:04.652 "name": "Nvme$subsystem", 00:38:04.652 "trtype": "$TEST_TRANSPORT", 00:38:04.652 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:04.652 "adrfam": "ipv4", 00:38:04.652 "trsvcid": "$NVMF_PORT", 00:38:04.652 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:04.652 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:38:04.652 "hdgst": ${hdgst:-false}, 00:38:04.652 "ddgst": ${ddgst:-false} 00:38:04.652 }, 00:38:04.652 "method": "bdev_nvme_attach_controller" 00:38:04.652 } 00:38:04.652 EOF 00:38:04.652 )") 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:04.652 "params": { 00:38:04.652 "name": "Nvme0", 00:38:04.652 "trtype": "tcp", 00:38:04.652 "traddr": "10.0.0.2", 00:38:04.652 "adrfam": "ipv4", 00:38:04.652 "trsvcid": "4420", 00:38:04.652 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:04.652 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:04.652 "hdgst": true, 00:38:04.652 "ddgst": true 00:38:04.652 }, 00:38:04.652 "method": "bdev_nvme_attach_controller" 00:38:04.652 }' 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:04.652 08:37:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:04.912 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:38:04.912 ... 
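In the JSON just printed, hdgst and ddgst are true: the attached controller negotiates NVMe/TCP header and data digests (CRC32C), so corrupted PDUs are detected at the transport layer, which is the behavior this fio_dif_digest pass (128 KiB blocks, iodepth 3, three jobs) exercises. The same attach can also be issued by hand against a running SPDK app over its RPC socket; the sketch below is a hypothetical equivalent, and the option spellings should be checked against scripts/rpc.py bdev_nvme_attach_controller -h for the SPDK revision in use:

# Hypothetical standalone equivalent of the attach encoded in the JSON above.
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
    -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --hdgst --ddgst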
00:38:04.912 fio-3.35
00:38:04.912 Starting 3 threads
00:38:17.138
00:38:17.138 filename0: (groupid=0, jobs=1): err= 0: pid=2284433: Thu Nov 28 08:37:12 2024
00:38:17.138 read: IOPS=287, BW=36.0MiB/s (37.7MB/s)(361MiB/10046msec)
00:38:17.138 slat (nsec): min=5970, max=32465, avg=7001.40, stdev=1324.11
00:38:17.138 clat (usec): min=7899, max=49743, avg=10404.24, stdev=1282.81
00:38:17.138 lat (usec): min=7907, max=49750, avg=10411.25, stdev=1282.84
00:38:17.138 clat percentiles (usec):
00:38:17.138 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634],
00:38:17.138 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552],
00:38:17.138 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731],
00:38:17.138 | 99.00th=[12518], 99.50th=[12780], 99.90th=[13173], 99.95th=[46924],
00:38:17.138 | 99.99th=[49546]
00:38:17.138 bw ( KiB/s): min=35328, max=37888, per=33.18%, avg=36966.40, stdev=776.49, samples=20
00:38:17.138 iops : min= 276, max= 296, avg=288.80, stdev= 6.07, samples=20
00:38:17.138 lat (msec) : 10=32.66%, 20=67.27%, 50=0.07%
00:38:17.138 cpu : usr=94.82%, sys=4.95%, ctx=21, majf=0, minf=156
00:38:17.138 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:17.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:17.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:17.138 issued rwts: total=2890,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:17.138 latency : target=0, window=0, percentile=100.00%, depth=3
00:38:17.138 filename0: (groupid=0, jobs=1): err= 0: pid=2284434: Thu Nov 28 08:37:12 2024
00:38:17.138 read: IOPS=302, BW=37.8MiB/s (39.7MB/s)(380MiB/10047msec)
00:38:17.138 slat (nsec): min=5758, max=32346, avg=6678.30, stdev=1244.26
00:38:17.138 clat (usec): min=6726, max=51193, avg=9892.42, stdev=1381.59
00:38:17.138 lat (usec): min=6734, max=51200, avg=9899.09, stdev=1381.56
00:38:17.138 clat percentiles (usec):
00:38:17.138 | 1.00th=[ 7701], 5.00th=[ 8291], 10.00th=[ 8717], 20.00th=[ 9110],
00:38:17.138 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10159],
00:38:17.138 | 70.00th=[10421], 80.00th=[10552], 90.00th=[11076], 95.00th=[11338],
00:38:17.138 | 99.00th=[11994], 99.50th=[12125], 99.90th=[12649], 99.95th=[49021],
00:38:17.138 | 99.99th=[51119]
00:38:17.138 bw ( KiB/s): min=37120, max=43008, per=34.90%, avg=38886.40, stdev=1954.76, samples=20
00:38:17.138 iops : min= 290, max= 336, avg=303.80, stdev=15.27, samples=20
00:38:17.138 lat (msec) : 10=54.14%, 20=45.79%, 50=0.03%, 100=0.03%
00:38:17.138 cpu : usr=94.55%, sys=5.22%, ctx=19, majf=0, minf=132
00:38:17.138 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:17.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:17.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:17.138 issued rwts: total=3040,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:17.138 latency : target=0, window=0, percentile=100.00%, depth=3
00:38:17.138 filename0: (groupid=0, jobs=1): err= 0: pid=2284435: Thu Nov 28 08:37:12 2024
00:38:17.138 read: IOPS=280, BW=35.0MiB/s (36.7MB/s)(352MiB/10045msec)
00:38:17.138 slat (nsec): min=5921, max=31931, avg=6707.94, stdev=1212.43
00:38:17.138 clat (usec): min=7834, max=49409, avg=10677.61, stdev=1293.67
00:38:17.138 lat (usec): min=7841, max=49416, avg=10684.32, stdev=1293.72
00:38:17.138 clat percentiles (usec):
00:38:17.138 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[ 9896],
00:38:17.138 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814],
00:38:17.138 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[12125],
00:38:17.138 | 99.00th=[12780], 99.50th=[12911], 99.90th=[14353], 99.95th=[45351],
00:38:17.138 | 99.99th=[49546]
00:38:17.138 bw ( KiB/s): min=34560, max=37120, per=32.33%, avg=36019.20, stdev=627.62, samples=20
00:38:17.138 iops : min= 270, max= 290, avg=281.40, stdev= 4.90, samples=20
00:38:17.138 lat (msec) : 10=21.84%, 20=78.09%, 50=0.07%
00:38:17.138 cpu : usr=94.89%, sys=4.89%, ctx=17, majf=0, minf=100
00:38:17.138 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:17.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:17.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:17.138 issued rwts: total=2816,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:17.138 latency : target=0, window=0, percentile=100.00%, depth=3
00:38:17.138
00:38:17.138 Run status group 0 (all jobs):
00:38:17.138 READ: bw=109MiB/s (114MB/s), 35.0MiB/s-37.8MiB/s (36.7MB/s-39.7MB/s), io=1093MiB (1146MB), run=10045-10047msec
00:38:17.138 08:37:12 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0
00:38:17.138 08:37:12 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub
00:38:17.138 08:37:12 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@"
00:38:17.138 08:37:12 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0
00:38:17.138 08:37:12 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0
00:38:17.138 08:37:12 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:38:17.138 08:37:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:17.138 08:37:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:38:17.138 08:37:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:17.138 08:37:12 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:38:17.138 08:37:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:17.138 08:37:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:38:17.138 08:37:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:17.138
00:38:17.138 real 0m11.201s
00:38:17.138 user 0m40.118s
00:38:17.138 sys 0m1.818s
00:38:17.138 08:37:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:17.138 08:37:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:38:17.138 ************************************
00:38:17.138 END TEST fio_dif_digest
00:38:17.138 ************************************
00:38:17.138 08:37:12 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:38:17.138 08:37:12 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini
00:38:17.138 08:37:12 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup
00:38:17.138 08:37:12 nvmf_dif -- nvmf/common.sh@121 -- # sync
00:38:17.138 08:37:12 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:17.138 08:37:12 nvmf_dif -- nvmf/common.sh@124 -- # set +e
00:38:17.138 08:37:12 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:17.138 08:37:12 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:38:17.139 rmmod nvme_tcp
00:38:17.139 rmmod nvme_fabrics
00:38:17.139 rmmod nvme_keyring
00:38:17.139 08:37:13 nvmf_dif --
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:17.138 08:37:13 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:38:17.138 08:37:13 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:38:17.138 08:37:13 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2274219 ']' 00:38:17.138 08:37:13 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2274219 00:38:17.138 08:37:13 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2274219 ']' 00:38:17.138 08:37:13 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2274219 00:38:17.138 08:37:13 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:38:17.139 08:37:13 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:17.139 08:37:13 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2274219 00:38:17.139 08:37:13 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:17.139 08:37:13 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:17.139 08:37:13 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2274219' 00:38:17.139 killing process with pid 2274219 00:38:17.139 08:37:13 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2274219 00:38:17.139 08:37:13 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2274219 00:38:17.139 08:37:13 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:17.139 08:37:13 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:19.685 Waiting for block devices as requested 00:38:19.685 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:19.685 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:19.685 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:19.685 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:19.685 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:19.947 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:19.947 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:19.947 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:20.208 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:20.208 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:20.470 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:20.470 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:20.470 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:20.470 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:20.730 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:20.730 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:20.730 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:21.300 08:37:18 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:21.300 08:37:18 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:21.300 08:37:18 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:38:21.300 08:37:18 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:38:21.300 08:37:18 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:21.300 08:37:18 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:38:21.300 08:37:18 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:21.300 08:37:18 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:21.300 08:37:18 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:21.300 08:37:18 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:21.300 08:37:18 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:23.213 08:37:20 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:23.214 
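Condensed, the nvmftestfini/nvmfcleanup teardown traced above is: flush I/O, unload the initiator-side NVMe modules (the rmmod lines), kill the nvmf_tgt process, rebind PCI devices through setup.sh reset, strip only the firewall rules the harness tagged, and dismantle the target namespace. A sketch of that sequence outside the harness, using this rig's names:

# nvmftestfini, in essence (interface and namespace names are this testbed's):
sync
modprobe -r nvme-tcp nvme-fabrics nvme-keyring        # emits the rmmod lines seen above
kill "$nvmfpid" && wait "$nvmfpid"                    # nvmf_tgt started by nvmfappstart
iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only rules tagged SPDK_NVMF
ip netns delete cvl_0_0_ns_spdk                       # what _remove_spdk_ns boils down to
ip -4 addr flush cvl_0_1

Because every ACCEPT rule the harness inserts carries an '-m comment' tag beginning with SPDK_NVMF, the restore step can filter rules with a plain grep instead of tracking rule numbers.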
00:38:23.214 real 1m18.934s 00:38:23.214 user 7m56.597s 00:38:23.214 sys 0m22.335s 00:38:23.214 08:37:20 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:23.214 08:37:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:23.214 ************************************ 00:38:23.214 END TEST nvmf_dif 00:38:23.214 ************************************ 00:38:23.214 08:37:20 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:23.214 08:37:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:23.214 08:37:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:23.214 08:37:20 -- common/autotest_common.sh@10 -- # set +x 00:38:23.214 ************************************ 00:38:23.214 START TEST nvmf_abort_qd_sizes 00:38:23.214 ************************************ 00:38:23.214 08:37:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:23.475 * Looking for test storage... 00:38:23.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:23.475 08:37:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:23.475 08:37:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:38:23.475 08:37:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:23.475 08:37:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:23.475 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:23.475 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:23.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.476 --rc genhtml_branch_coverage=1 00:38:23.476 --rc genhtml_function_coverage=1 00:38:23.476 --rc genhtml_legend=1 00:38:23.476 --rc geninfo_all_blocks=1 00:38:23.476 --rc geninfo_unexecuted_blocks=1 00:38:23.476 00:38:23.476 ' 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:23.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.476 --rc genhtml_branch_coverage=1 00:38:23.476 --rc genhtml_function_coverage=1 00:38:23.476 --rc genhtml_legend=1 00:38:23.476 --rc geninfo_all_blocks=1 00:38:23.476 --rc geninfo_unexecuted_blocks=1 00:38:23.476 00:38:23.476 ' 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:23.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.476 --rc genhtml_branch_coverage=1 00:38:23.476 --rc genhtml_function_coverage=1 00:38:23.476 --rc genhtml_legend=1 00:38:23.476 --rc geninfo_all_blocks=1 00:38:23.476 --rc geninfo_unexecuted_blocks=1 00:38:23.476 00:38:23.476 ' 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:23.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:23.476 --rc genhtml_branch_coverage=1 00:38:23.476 --rc genhtml_function_coverage=1 00:38:23.476 --rc genhtml_legend=1 00:38:23.476 --rc geninfo_all_blocks=1 00:38:23.476 --rc geninfo_unexecuted_blocks=1 00:38:23.476 00:38:23.476 ' 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:23.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:38:23.476 08:37:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:31.619 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:31.619 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:31.619 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:31.619 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:31.619 08:37:27 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:38:31.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:38:31.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms
00:38:31.619 
00:38:31.619 --- 10.0.0.2 ping statistics ---
00:38:31.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:38:31.619 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms
00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:38:31.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:38:31.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms
00:38:31.619 
00:38:31.619 --- 10.0.0.1 ping statistics ---
00:38:31.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:38:31.619 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms
00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0
00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']'
00:38:31.619 08:37:27 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:38:34.167 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:38:34.167 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:38:34.167 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:38:34.167 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:38:34.167 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:38:34.167 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:38:34.167 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:38:34.167 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:38:34.167 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:38:34.167 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:38:34.428 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:38:34.428 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:38:34.428 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:38:34.428 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:38:34.428 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:38:34.428 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:38:34.428 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:38:34.689 08:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:38:34.689 08:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:38:34.689 08:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:38:34.689 08:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:38:34.689 08:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:38:34.689 08:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:38:34.689 08:37:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf
00:38:34.689 08:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:38:34.689 08:37:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:34.689 08:37:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:38:34.689 08:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2293866
00:38:34.689 08:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2293866
00:38:34.689 08:37:31 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf
00:38:34.689 08:37:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2293866 ']'
00:38:34.689 08:37:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:34.689 08:37:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100
00:38:34.689 08:37:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
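nvmf_tcp_init, traced above, is SPDK's single-host fabric trick: one port of the two-port e810 NIC (cvl_0_0) moves into a private network namespace, so the initiator on cvl_0_1 (10.0.0.1) and the target on cvl_0_0 (10.0.0.2) talk over a real TCP link, and nvmfappstart then launches nvmf_tgt inside that namespace. A minimal sketch condensed from the trace (not a drop-in replacement for nvmf/common.sh):

# move the target port into its own namespace; the initiator port stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port, tagged so teardown can find the rule again
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:...'
# verify both directions, then run the target inside the namespace
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &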
00:38:34.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:34.689 08:37:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:34.689 08:37:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:34.951 [2024-11-28 08:37:32.012171] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:38:34.951 [2024-11-28 08:37:32.012222] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:34.951 [2024-11-28 08:37:32.106832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:34.951 [2024-11-28 08:37:32.144242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:34.951 [2024-11-28 08:37:32.144276] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:34.951 [2024-11-28 08:37:32.144284] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:34.951 [2024-11-28 08:37:32.144290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:34.951 [2024-11-28 08:37:32.144297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:34.951 [2024-11-28 08:37:32.146049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:34.951 [2024-11-28 08:37:32.146214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:34.951 [2024-11-28 08:37:32.146260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:34.951 [2024-11-28 08:37:32.146260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:35.524 08:37:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:35.524 08:37:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:38:35.524 08:37:32 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:35.524 08:37:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:35.524 08:37:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:35.786 08:37:32 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:35.786 08:37:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:38:35.786 08:37:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:38:35.786 08:37:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:38:35.786 08:37:32 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:38:35.786 08:37:32 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:38:35.786 08:37:32 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:38:35.786 08:37:32 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:38:35.786 08:37:32 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:38:35.786 08:37:32 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:38:35.786 08:37:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:38:35.786 
08:37:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:38:35.786 08:37:32 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:38:35.786 08:37:32 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:38:35.786 08:37:32 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:38:35.786 08:37:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:38:35.786 08:37:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:38:35.786 08:37:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:38:35.786 08:37:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:35.786 08:37:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:35.786 08:37:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:35.786 ************************************ 00:38:35.786 START TEST spdk_target_abort 00:38:35.786 ************************************ 00:38:35.786 08:37:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:38:35.786 08:37:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:38:35.786 08:37:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:38:35.786 08:37:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:35.786 08:37:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:36.048 spdk_targetn1 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:36.048 [2024-11-28 08:37:33.216067] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:36.048 [2024-11-28 08:37:33.268390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:36.048 08:37:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:36.310 [2024-11-28 08:37:33.538768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:188 nsid:1 lba:224 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:38:36.310 [2024-11-28 08:37:33.538818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:001d p:1 m:0 dnr:0 00:38:36.310 [2024-11-28 08:37:33.566768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1152 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:38:36.310 [2024-11-28 08:37:33.566802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0091 p:1 m:0 dnr:0 00:38:36.310 [2024-11-28 08:37:33.586670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1608 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:38:36.310 [2024-11-28 08:37:33.586704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00cc p:1 m:0 dnr:0 00:38:36.310 [2024-11-28 08:37:33.587794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1672 len:8 PRP1 0x200004abe000 PRP2 0x0 00:38:36.310 [2024-11-28 08:37:33.587816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00d3 p:1 m:0 dnr:0 00:38:36.572 [2024-11-28 08:37:33.606692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2344 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:38:36.572 [2024-11-28 08:37:33.606726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:38:36.572 [2024-11-28 08:37:33.622588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2832 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:38:36.572 [2024-11-28 08:37:33.622619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:38:36.572 [2024-11-28 08:37:33.629754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3032 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:38:36.572 [2024-11-28 08:37:33.629783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:38:36.572 [2024-11-28 08:37:33.645700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3512 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:38:36.572 [2024-11-28 08:37:33.645731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00b9 p:0 m:0 dnr:0 00:38:39.881 Initializing NVMe Controllers 00:38:39.881 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:39.881 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:39.881 Initialization complete. Launching workers. 
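Before this first run, spdk_target_abort stood the target up with a short rpc_cmd sequence, traced above. The equivalent calls through scripts/rpc.py would look roughly as follows; the RPC names and arguments are copied from the trace, while the rpc.py form of invocation is an assumption (rpc_cmd is a thin wrapper over the same JSON-RPC socket):

# the local 144d:a80a NVMe drive becomes controller spdk_target, namespace bdev spdk_targetn1
./scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target
# TCP transport with an 8 KiB I/O unit, then subsystem, namespace, and listener
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

Two identities make the summary counters below self-checking, and they hold for every run in this log: each I/O the example issued either had an abort submitted for it or did not, and each submitted abort either succeeded or did not. For the qd=4 run whose counters follow:

abort submitted (2404) + failed to submit (8689) = 11093 = I/O completed (11085) + failed (8)
success (726) + unsuccessful (1678) = abort submitted (2404)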
00:38:39.881 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11085, failed: 8
00:38:39.881 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2404, failed to submit 8689
00:38:39.881 success 726, unsuccessful 1678, failed 0
00:38:39.881 08:37:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:38:39.881 08:37:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:38:39.881 [2024-11-28 08:37:36.835494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:432 len:8 PRP1 0x200004e4a000 PRP2 0x0
00:38:39.881 [2024-11-28 08:37:36.835532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:0048 p:1 m:0 dnr:0
00:38:39.881 [2024-11-28 08:37:36.866373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:180 nsid:1 lba:1208 len:8 PRP1 0x200004e52000 PRP2 0x0
00:38:39.881 [2024-11-28 08:37:36.866398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:180 cdw0:0 sqhd:009c p:1 m:0 dnr:0
00:38:39.881 [2024-11-28 08:37:36.873182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:1384 len:8 PRP1 0x200004e4c000 PRP2 0x0
00:38:39.881 [2024-11-28 08:37:36.873203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:00b9 p:1 m:0 dnr:0
00:38:39.881 [2024-11-28 08:37:36.900340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:2072 len:8 PRP1 0x200004e42000 PRP2 0x0
00:38:39.881 [2024-11-28 08:37:36.900362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:38:39.881 [2024-11-28 08:37:36.916211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:2552 len:8 PRP1 0x200004e48000 PRP2 0x0
00:38:39.881 [2024-11-28 08:37:36.916233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:38:39.881 [2024-11-28 08:37:36.964310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:3464 len:8 PRP1 0x200004e5a000 PRP2 0x0
00:38:39.881 [2024-11-28 08:37:36.964334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:00c0 p:0 m:0 dnr:0
00:38:43.353 Initializing NVMe Controllers
00:38:43.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:38:43.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:38:43.353 Initialization complete. Launching workers.
00:38:43.353 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8711, failed: 6
00:38:43.353 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1219, failed to submit 7498
00:38:43.353 success 346, unsuccessful 873, failed 0
00:38:43.353 08:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:38:43.353 08:37:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:38:45.894 Initializing NVMe Controllers
00:38:45.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:38:45.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:38:45.894 Initialization complete. Launching workers.
00:38:45.894 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43576, failed: 0
00:38:45.894 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2694, failed to submit 40882
00:38:45.894 success 606, unsuccessful 2088, failed 0
00:38:45.894 08:37:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
00:38:45.894 08:37:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:45.894 08:37:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:38:45.894 08:37:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:45.894 08:37:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target
00:38:45.894 08:37:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:45.894 08:37:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:38:47.804 08:37:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:47.804 08:37:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2293866
00:38:47.804 08:37:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2293866 ']'
00:38:47.804 08:37:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2293866
00:38:47.804 08:37:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname
00:38:47.804 08:37:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:38:47.804 08:37:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2293866
00:38:47.804 08:37:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:38:47.804 08:37:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:38:47.804 08:37:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2293866'
00:38:47.804 killing process with pid 2293866
00:38:47.804 08:37:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2293866
00:38:47.804 08:37:45 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@978 -- # wait 2293866 00:38:48.069 00:38:48.069 real 0m12.280s 00:38:48.069 user 0m50.040s 00:38:48.069 sys 0m1.988s 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:48.069 ************************************ 00:38:48.069 END TEST spdk_target_abort 00:38:48.069 ************************************ 00:38:48.069 08:37:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:48.069 08:37:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:48.069 08:37:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:48.069 08:37:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:48.069 ************************************ 00:38:48.069 START TEST kernel_target_abort 00:38:48.069 ************************************ 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:48.069 08:37:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:51.368 Waiting for block devices as requested 00:38:51.368 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:51.628 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:51.628 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:51.628 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:51.889 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:51.889 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:51.889 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:52.149 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:52.149 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:52.409 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:52.409 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:52.409 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:52.669 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:52.669 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:52.669 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:52.931 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:52.931 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:53.191 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:53.191 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:53.191 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:38:53.191 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:38:53.191 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:53.191 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:38:53.191 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:38:53.191 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:38:53.191 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:53.191 No valid GPT data, bailing 00:38:53.191 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:53.191 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:53.191 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:53.191 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:38:53.191 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:38:53.191 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:53.191 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:53.191 08:37:50 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:53.191 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:53.192 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:38:53.192 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:38:53.192 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:38:53.192 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:38:53.192 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:38:53.192 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:38:53.192 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:38:53.192 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:53.453 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:38:53.453 00:38:53.453 Discovery Log Number of Records 2, Generation counter 2 00:38:53.453 =====Discovery Log Entry 0====== 00:38:53.453 trtype: tcp 00:38:53.453 adrfam: ipv4 00:38:53.453 subtype: current discovery subsystem 00:38:53.453 treq: not specified, sq flow control disable supported 00:38:53.453 portid: 1 00:38:53.453 trsvcid: 4420 00:38:53.453 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:53.453 traddr: 10.0.0.1 00:38:53.453 eflags: none 00:38:53.453 sectype: none 00:38:53.453 =====Discovery Log Entry 1====== 00:38:53.453 trtype: tcp 00:38:53.453 adrfam: ipv4 00:38:53.453 subtype: nvme subsystem 00:38:53.453 treq: not specified, sq flow control disable supported 00:38:53.453 portid: 1 00:38:53.453 trsvcid: 4420 00:38:53.453 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:53.453 traddr: 10.0.0.1 00:38:53.453 eflags: none 00:38:53.453 sectype: none 00:38:53.453 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:53.453 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:53.453 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:53.453 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:53.453 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:53.453 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:53.453 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:53.453 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:53.453 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:53.453 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:53.453 08:37:50 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:38:53.453 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:38:53.453 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:38:53.453 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:38:53.453 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1'
00:38:53.453 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:38:53.453 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420'
00:38:53.453 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:38:53.453 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:38:53.453 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:38:53.453 08:37:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:38:56.755 Initializing NVMe Controllers
00:38:56.755 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:38:56.755 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:38:56.755 Initialization complete. Launching workers.
00:38:56.755 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67908, failed: 0
00:38:56.755 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67908, failed to submit 0
00:38:56.755 success 0, unsuccessful 67908, failed 0
00:38:56.755 08:37:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:38:56.755 08:37:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:39:00.053 Initializing NVMe Controllers
00:39:00.053 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:39:00.053 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:39:00.053 Initialization complete. Launching workers.
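The kernel-mode target under test here was assembled with no RPC server at all: configure_kernel_target built it directly in configfs, which is what the mkdir/echo/ln trace further up records. A rough reconstruction with the redirect targets filled in (xtrace does not show them, so the attribute names are inferred from the standard nvmet configfs layout; serial/model setup omitted):

modprobe nvmet
cd /sys/kernel/config/nvmet
# subsystem with one namespace backed by the raw NVMe block device
mkdir -p subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
# TCP port 4420 on 10.0.0.1, then expose the subsystem through it
mkdir ports/1
echo 10.0.0.1 > ports/1/addr_traddr
echo tcp > ports/1/addr_trtype
echo 4420 > ports/1/addr_trsvcid
echo ipv4 > ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/

The counters tell the real story of this half of the test: in the qd=4 summary above and in the qd=24 and qd=64 summaries that follow, every submitted abort comes back unsuccessful (success 0), which suggests the Linux nvmet target simply lets the racing commands complete, whereas the SPDK target above aborted several hundred per run.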
00:39:00.053 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 119537, failed: 0 00:39:00.053 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30090, failed to submit 89447 00:39:00.053 success 0, unsuccessful 30090, failed 0 00:39:00.053 08:37:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:00.053 08:37:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:03.354 Initializing NVMe Controllers 00:39:03.354 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:03.354 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:03.354 Initialization complete. Launching workers. 00:39:03.354 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146385, failed: 0 00:39:03.354 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36642, failed to submit 109743 00:39:03.354 success 0, unsuccessful 36642, failed 0 00:39:03.354 08:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:39:03.354 08:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:39:03.354 08:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:39:03.354 08:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:03.354 08:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:03.354 08:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:03.354 08:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:03.355 08:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:39:03.355 08:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:39:03.355 08:37:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:06.656 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:06.656 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:06.656 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:06.656 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:06.656 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:06.656 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:06.656 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:06.657 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:06.657 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:06.657 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:06.657 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:06.657 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:06.657 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:06.657 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:06.657 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:39:06.657 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:08.572 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:39:08.572 00:39:08.572 real 0m20.462s 00:39:08.572 user 0m10.066s 00:39:08.572 sys 0m6.042s 00:39:08.572 08:38:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:08.572 08:38:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:08.572 ************************************ 00:39:08.572 END TEST kernel_target_abort 00:39:08.572 ************************************ 00:39:08.572 08:38:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:08.572 08:38:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:39:08.572 08:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:08.572 08:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:39:08.572 08:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:08.572 08:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:39:08.572 08:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:08.572 08:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:08.572 rmmod nvme_tcp 00:39:08.572 rmmod nvme_fabrics 00:39:08.572 rmmod nvme_keyring 00:39:08.572 08:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:08.572 08:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:39:08.573 08:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:39:08.573 08:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2293866 ']' 00:39:08.573 08:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2293866 00:39:08.573 08:38:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2293866 ']' 00:39:08.573 08:38:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2293866 00:39:08.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2293866) - No such process 00:39:08.573 08:38:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2293866 is not found' 00:39:08.573 Process with pid 2293866 is not found 00:39:08.573 08:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:39:08.573 08:38:05 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:12.781 Waiting for block devices as requested 00:39:12.781 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:12.781 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:12.781 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:12.781 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:12.781 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:12.781 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:12.781 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:12.781 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:12.781 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:13.042 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:13.042 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:13.042 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:13.303 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:13.303 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:13.303 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:13.564 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:13.564 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:13.825 08:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:13.825 08:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:13.825 08:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:39:13.825 08:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:39:13.825 08:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:13.825 08:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:39:13.825 08:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:13.825 08:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:13.825 08:38:11 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:13.825 08:38:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:13.825 08:38:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:16.393 08:38:13 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:16.393 00:39:16.393 real 0m52.633s 00:39:16.393 user 1m5.634s 00:39:16.393 sys 0m19.025s 00:39:16.393 08:38:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:16.393 08:38:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:16.393 ************************************ 00:39:16.393 END TEST nvmf_abort_qd_sizes 00:39:16.393 ************************************ 00:39:16.393 08:38:13 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:16.393 08:38:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:16.393 08:38:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:16.393 08:38:13 -- common/autotest_common.sh@10 -- # set +x 00:39:16.393 ************************************ 00:39:16.393 START TEST keyring_file 00:39:16.393 ************************************ 00:39:16.393 08:38:13 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:16.393 * Looking for test storage... 
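One detail of the teardown logged just above deserves a note: the iptr helper can strip exactly the firewall rules SPDK added because every rule carried an -m comment tag when it was inserted. Cleanup is then a pure filter over the saved ruleset, per the trace:

# keep every rule except the SPDK-tagged ones
iptables-save | grep -v SPDK_NVMF | iptables-restore

With that, nvmf_abort_qd_sizes is done and the keyring_file test picks up below.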
00:39:16.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:16.393 08:38:13 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:16.393 08:38:13 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:39:16.393 08:38:13 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:16.394 08:38:13 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@345 -- # : 1 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@353 -- # local d=1 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@355 -- # echo 1 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@353 -- # local d=2 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@355 -- # echo 2 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@368 -- # return 0 00:39:16.394 08:38:13 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:16.394 08:38:13 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:16.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.394 --rc genhtml_branch_coverage=1 00:39:16.394 --rc genhtml_function_coverage=1 00:39:16.394 --rc genhtml_legend=1 00:39:16.394 --rc geninfo_all_blocks=1 00:39:16.394 --rc geninfo_unexecuted_blocks=1 00:39:16.394 00:39:16.394 ' 00:39:16.394 08:38:13 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:16.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.394 --rc genhtml_branch_coverage=1 00:39:16.394 --rc genhtml_function_coverage=1 00:39:16.394 --rc genhtml_legend=1 00:39:16.394 --rc geninfo_all_blocks=1 
00:39:16.394 --rc geninfo_unexecuted_blocks=1 00:39:16.394 00:39:16.394 ' 00:39:16.394 08:38:13 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:16.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.394 --rc genhtml_branch_coverage=1 00:39:16.394 --rc genhtml_function_coverage=1 00:39:16.394 --rc genhtml_legend=1 00:39:16.394 --rc geninfo_all_blocks=1 00:39:16.394 --rc geninfo_unexecuted_blocks=1 00:39:16.394 00:39:16.394 ' 00:39:16.394 08:38:13 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:16.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.394 --rc genhtml_branch_coverage=1 00:39:16.394 --rc genhtml_function_coverage=1 00:39:16.394 --rc genhtml_legend=1 00:39:16.394 --rc geninfo_all_blocks=1 00:39:16.394 --rc geninfo_unexecuted_blocks=1 00:39:16.394 00:39:16.394 ' 00:39:16.394 08:38:13 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:16.394 08:38:13 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:16.394 08:38:13 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:16.394 08:38:13 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.394 08:38:13 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.394 08:38:13 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.394 08:38:13 keyring_file -- paths/export.sh@5 -- # export PATH 00:39:16.394 08:38:13 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@51 -- # : 0 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:16.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:16.394 08:38:13 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:16.394 08:38:13 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:16.394 08:38:13 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:16.394 08:38:13 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:39:16.394 08:38:13 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:39:16.394 08:38:13 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:39:16.394 08:38:13 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:16.394 08:38:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
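The prep_key trace that starts here (and continues below) wraps each raw hex secret in the NVMe TLS PSK interchange format and stores it in a mode-0600 temp file. A rough sketch of the derivation: the NVMeTLSkey-1 prefix, digest 0, mktemp, and chmod come from the trace, while the CRC placement and the two-digit middle field are my reading of the interchange format, to be checked against format_key in nvmf/common.sh:

key=00112233445566778899aabbccddeeff
path=$(mktemp)   # /tmp/tmp.qvEtgwOz3K for key0 in this run
python3 - "$key" > "$path" << 'PY'
import base64, sys, zlib
secret = bytes.fromhex(sys.argv[1])
# secret with its CRC32 appended, base64-encoded, wrapped with the NVMeTLSkey-1 prefix;
# the "01" hash-indicator field is an assumption, not something visible in this log
crc = zlib.crc32(secret).to_bytes(4, "little")
print("NVMeTLSkey-1:01:" + base64.b64encode(secret + crc).decode() + ":")
PY
chmod 0600 "$path"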
00:39:16.394 08:38:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:16.394 08:38:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:16.394 08:38:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:16.394 08:38:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:16.394 08:38:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.qvEtgwOz3K 00:39:16.394 08:38:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:16.394 08:38:13 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:16.394 08:38:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.qvEtgwOz3K 00:39:16.394 08:38:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.qvEtgwOz3K 00:39:16.394 08:38:13 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.qvEtgwOz3K 00:39:16.394 08:38:13 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:39:16.394 08:38:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:16.394 08:38:13 keyring_file -- keyring/common.sh@17 -- # name=key1 00:39:16.394 08:38:13 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:16.394 08:38:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:16.395 08:38:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:16.395 08:38:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.lApxnzcFWQ 00:39:16.395 08:38:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:16.395 08:38:13 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:16.395 08:38:13 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:16.395 08:38:13 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:16.395 08:38:13 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:16.395 08:38:13 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:16.395 08:38:13 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:16.395 08:38:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.lApxnzcFWQ 00:39:16.395 08:38:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.lApxnzcFWQ 00:39:16.395 08:38:13 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.lApxnzcFWQ 00:39:16.395 08:38:13 keyring_file -- keyring/file.sh@30 -- # tgtpid=2304339 00:39:16.395 08:38:13 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2304339 00:39:16.395 08:38:13 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:16.395 08:38:13 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2304339 ']' 00:39:16.395 08:38:13 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:16.395 08:38:13 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:16.395 08:38:13 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:39:16.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:39:16.395 08:38:13 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable
00:39:16.395 08:38:13 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:39:16.395 [2024-11-28 08:38:13.614341] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization...
00:39:16.395 [2024-11-28 08:38:13.614430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2304339 ]
00:39:16.655 [2024-11-28 08:38:13.709103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:39:16.655 [2024-11-28 08:38:13.761843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:39:17.226 08:38:14 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:39:17.226 08:38:14 keyring_file -- common/autotest_common.sh@868 -- # return 0
00:39:17.226 08:38:14 keyring_file -- keyring/file.sh@33 -- # rpc_cmd
00:39:17.226 08:38:14 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:17.226 08:38:14 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:39:17.226 [2024-11-28 08:38:14.457273] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:39:17.226 null0
00:39:17.226 [2024-11-28 08:38:14.489314] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:39:17.226 [2024-11-28 08:38:14.489888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:39:17.226 08:38:14 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:17.226 08:38:14 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:39:17.226 08:38:14 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:39:17.226 08:38:14 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:39:17.226 08:38:14 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:39:17.489 08:38:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:39:17.489 08:38:14 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:39:17.489 08:38:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:39:17.489 08:38:14 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:39:17.489 08:38:14 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:17.489 08:38:14 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:39:17.489 [2024-11-28 08:38:14.521390] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists
00:39:17.489 request:
00:39:17.489 {
00:39:17.489 "nqn": "nqn.2016-06.io.spdk:cnode0",
00:39:17.489 "secure_channel": false,
00:39:17.489 "listen_address": {
00:39:17.489 "trtype": "tcp",
00:39:17.489 "traddr": "127.0.0.1",
00:39:17.489 "trsvcid": "4420"
00:39:17.489 },
00:39:17.489 "method": "nvmf_subsystem_add_listener",
00:39:17.489 "req_id": 1
00:39:17.489 }
00:39:17.489 Got JSON-RPC error response
00:39:17.489 response:
00:39:17.489 {
00:39:17.489 "code": -32602,
00:39:17.489 "message": "Invalid parameters"
00:39:17.489 }
00:39:17.489 08:38:14 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:39:17.489 08:38:14 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:39:17.489 08:38:14 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:39:17.489 08:38:14 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:39:17.489 08:38:14 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:39:17.489 08:38:14 keyring_file -- keyring/file.sh@47 -- # bperfpid=2304416
00:39:17.489 08:38:14 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2304416 /var/tmp/bperf.sock
00:39:17.489 08:38:14 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2304416 ']'
00:39:17.489 08:38:14 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z
00:39:17.489 08:38:14 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:39:17.489 08:38:14 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100
00:39:17.489 08:38:14 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:39:17.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:39:17.489 08:38:14 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable
00:39:17.489 08:38:14 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:39:17.489 [2024-11-28 08:38:14.582070] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization...
00:39:17.489 [2024-11-28 08:38:14.582135] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2304416 ]
00:39:17.489 [2024-11-28 08:38:14.662717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:39:17.489 [2024-11-28 08:38:14.715083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:39:18.433 08:38:15 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:39:18.433 08:38:15 keyring_file -- common/autotest_common.sh@868 -- # return 0
00:39:18.433 08:38:15 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qvEtgwOz3K
00:39:18.433 08:38:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qvEtgwOz3K
00:39:18.433 08:38:15 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.lApxnzcFWQ
00:39:18.433 08:38:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.lApxnzcFWQ
00:39:18.695 08:38:15 keyring_file -- keyring/file.sh@52 -- # get_key key0
00:39:18.695 08:38:15 keyring_file -- keyring/file.sh@52 -- # jq -r .path
00:39:18.695 08:38:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:39:18.695 08:38:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:39:18.695 08:38:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:18.963 08:38:15 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.qvEtgwOz3K == \/\t\m\p\/\t\m\p\.\q\v\E\t\g\w\O\z\3\K ]] 00:39:18.963 08:38:16 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:39:18.963 08:38:16 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:39:18.963 08:38:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:18.963 08:38:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:18.963 08:38:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:18.963 08:38:16 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.lApxnzcFWQ == \/\t\m\p\/\t\m\p\.\l\A\p\x\n\z\c\F\W\Q ]] 00:39:18.963 08:38:16 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:39:18.963 08:38:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:18.963 08:38:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:18.963 08:38:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:18.963 08:38:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:18.963 08:38:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:19.340 08:38:16 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:39:19.340 08:38:16 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:39:19.340 08:38:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:19.340 08:38:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:19.340 08:38:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:19.340 08:38:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:19.340 08:38:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:19.601 08:38:16 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:39:19.601 08:38:16 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:19.601 08:38:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:19.601 [2024-11-28 08:38:16.795252] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:19.601 nvme0n1 00:39:19.862 08:38:16 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:39:19.862 08:38:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:19.862 08:38:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:19.862 08:38:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:19.862 08:38:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:19.862 08:38:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:19.862 08:38:17 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:39:19.862 08:38:17 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:39:19.862 08:38:17 keyring_file 
-- keyring/common.sh@12 -- # get_key key1
00:39:19.862 08:38:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:39:19.862 08:38:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:39:19.862 08:38:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:39:19.862 08:38:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:20.124 08:38:17 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 ))
00:39:20.124 08:38:17 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:39:20.124 Running I/O for 1 seconds...
00:39:21.069 20534.00 IOPS, 80.21 MiB/s
00:39:21.069 Latency(us)
00:39:21.069 [2024-11-28T07:38:18.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:39:21.069 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:39:21.069 nvme0n1 : 1.00 20588.07 80.42 0.00 0.00 6207.58 3577.17 10704.21
00:39:21.069 [2024-11-28T07:38:18.358Z] ===================================================================================================================
00:39:21.069 [2024-11-28T07:38:18.358Z] Total : 20588.07 80.42 0.00 0.00 6207.58 3577.17 10704.21
00:39:21.069 {
00:39:21.069 "results": [
00:39:21.069 {
00:39:21.069 "job": "nvme0n1",
00:39:21.069 "core_mask": "0x2",
00:39:21.069 "workload": "randrw",
00:39:21.069 "percentage": 50,
00:39:21.069 "status": "finished",
00:39:21.069 "queue_depth": 128,
00:39:21.069 "io_size": 4096,
00:39:21.069 "runtime": 1.003591,
00:39:21.069 "iops": 20588.06824692529,
00:39:21.069 "mibps": 80.42214158955191,
00:39:21.069 "io_failed": 0,
00:39:21.069 "io_timeout": 0,
00:39:21.069 "avg_latency_us": 6207.578330590779,
00:39:21.069 "min_latency_us": 3577.173333333333,
00:39:21.069 "max_latency_us": 10704.213333333333
00:39:21.069 }
00:39:21.069 ],
00:39:21.069 "core_count": 1
00:39:21.069 }
00:39:21.069 08:38:18 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:39:21.069 08:38:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:39:21.331 08:38:18 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0
00:39:21.331 08:38:18 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:39:21.331 08:38:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:39:21.331 08:38:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:39:21.331 08:38:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:21.331 08:38:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:39:21.593 08:38:18 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:39:21.593 08:38:18 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1
00:39:21.593 08:38:18 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:39:21.593 08:38:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:39:21.593 08:38:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:39:21.593 08:38:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:39:21.593 08:38:18 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:21.855 08:38:18 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:39:21.855 08:38:18 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:21.855 08:38:18 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:21.855 08:38:18 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:21.855 08:38:18 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:21.855 08:38:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:21.855 08:38:18 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:21.855 08:38:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:21.855 08:38:18 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:21.855 08:38:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:21.855 [2024-11-28 08:38:19.082215] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:21.855 [2024-11-28 08:38:19.082512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c31c50 (107): Transport endpoint is not connected 00:39:21.855 [2024-11-28 08:38:19.083508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c31c50 (9): Bad file descriptor 00:39:21.855 [2024-11-28 08:38:19.084509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:21.855 [2024-11-28 08:38:19.084517] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:21.855 [2024-11-28 08:38:19.084523] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:21.855 [2024-11-28 08:38:19.084529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
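The attach above was made with key1 while the target was set up for key0, so the connection is torn down and the RPC fails; the request/response dump that follows records the rejected call. The test asserts the failure with autotest_common.sh's NOT wrapper, whose exit-status bookkeeping is what the es=1, (( es > 128 )), and (( !es == 0 )) lines in the trace come from. A condensed sketch (the signal/coredump classification is dropped here):

NOT() { # succeed only if the wrapped command fails
        local es=0
        "$@" || es=$?
        (( es != 0 ))
}

# usage, as in keyring/file.sh@70: the attach with the wrong PSK must fail
NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1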
00:39:21.855 request:
00:39:21.855 {
00:39:21.856 "name": "nvme0",
00:39:21.856 "trtype": "tcp",
00:39:21.856 "traddr": "127.0.0.1",
00:39:21.856 "adrfam": "ipv4",
00:39:21.856 "trsvcid": "4420",
00:39:21.856 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:39:21.856 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:39:21.856 "prchk_reftag": false,
00:39:21.856 "prchk_guard": false,
00:39:21.856 "hdgst": false,
00:39:21.856 "ddgst": false,
00:39:21.856 "psk": "key1",
00:39:21.856 "allow_unrecognized_csi": false,
00:39:21.856 "method": "bdev_nvme_attach_controller",
00:39:21.856 "req_id": 1
00:39:21.856 }
00:39:21.856 Got JSON-RPC error response
00:39:21.856 response:
00:39:21.856 {
00:39:21.856 "code": -5,
00:39:21.856 "message": "Input/output error"
00:39:21.856 }
00:39:21.856 08:38:19 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:39:21.856 08:38:19 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:39:21.856 08:38:19 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:39:21.856 08:38:19 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:39:21.856 08:38:19 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0
00:39:21.856 08:38:19 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:39:21.856 08:38:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:39:21.856 08:38:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:39:21.856 08:38:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:21.856 08:38:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:39:22.117 08:38:19 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 ))
00:39:22.117 08:38:19 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1
00:39:22.117 08:38:19 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:39:22.117 08:38:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:39:22.117 08:38:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:39:22.117 08:38:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:39:22.117 08:38:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:22.379 08:38:19 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 ))
00:39:22.379 08:38:19 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0
00:39:22.379 08:38:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:39:22.379 08:38:19 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1
00:39:22.379 08:38:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1
00:39:22.640 08:38:19 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys
00:39:22.640 08:38:19 keyring_file -- keyring/file.sh@78 -- # jq length
00:39:22.640 08:38:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:22.902 08:38:19 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 ))
00:39:22.902 08:38:19 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.qvEtgwOz3K
00:39:22.902 08:38:19 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.qvEtgwOz3K
00:39:22.902 08:38:19 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:39:22.902 08:38:19 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.qvEtgwOz3K
00:39:22.902 08:38:19 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:39:22.902 08:38:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:39:22.902 08:38:19 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:39:22.902 08:38:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:39:22.902 08:38:19 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qvEtgwOz3K
00:39:22.902 08:38:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qvEtgwOz3K
00:39:22.902 [2024-11-28 08:38:20.112170] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.qvEtgwOz3K': 0100660
00:39:22.902 [2024-11-28 08:38:20.112193] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:39:22.902 request:
00:39:22.902 {
00:39:22.902 "name": "key0",
00:39:22.902 "path": "/tmp/tmp.qvEtgwOz3K",
00:39:22.902 "method": "keyring_file_add_key",
00:39:22.902 "req_id": 1
00:39:22.902 }
00:39:22.902 Got JSON-RPC error response
00:39:22.902 response:
00:39:22.902 {
00:39:22.902 "code": -1,
00:39:22.902 "message": "Operation not permitted"
00:39:22.902 }
00:39:22.902 08:38:20 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:39:22.902 08:38:20 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:39:22.902 08:38:20 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:39:22.902 08:38:20 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:39:22.902 08:38:20 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.qvEtgwOz3K
00:39:22.902 08:38:20 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qvEtgwOz3K
00:39:22.902 08:38:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qvEtgwOz3K
00:39:23.164 08:38:20 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.qvEtgwOz3K
00:39:23.164 08:38:20 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0
00:39:23.164 08:38:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:39:23.164 08:38:20 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:39:23.164 08:38:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:39:23.164 08:38:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:39:23.164 08:38:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:39:23.425 08:38:20 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 ))
00:39:23.425 08:38:20 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:39:23.425 08:38:20 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:39:23.425 08:38:20 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:23.425 08:38:20 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:23.425 08:38:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:23.425 08:38:20 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:23.425 08:38:20 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:23.425 08:38:20 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:23.425 08:38:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:23.425 [2024-11-28 08:38:20.673591] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.qvEtgwOz3K': No such file or directory 00:39:23.425 [2024-11-28 08:38:20.673607] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:39:23.425 [2024-11-28 08:38:20.673620] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:39:23.425 [2024-11-28 08:38:20.673626] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:39:23.425 [2024-11-28 08:38:20.673632] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:23.425 [2024-11-28 08:38:20.673637] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:39:23.425 request: 00:39:23.425 { 00:39:23.425 "name": "nvme0", 00:39:23.425 "trtype": "tcp", 00:39:23.425 "traddr": "127.0.0.1", 00:39:23.425 "adrfam": "ipv4", 00:39:23.425 "trsvcid": "4420", 00:39:23.425 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:23.425 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:23.425 "prchk_reftag": false, 00:39:23.425 "prchk_guard": false, 00:39:23.425 "hdgst": false, 00:39:23.425 "ddgst": false, 00:39:23.425 "psk": "key0", 00:39:23.425 "allow_unrecognized_csi": false, 00:39:23.425 "method": "bdev_nvme_attach_controller", 00:39:23.425 "req_id": 1 00:39:23.425 } 00:39:23.425 Got JSON-RPC error response 00:39:23.425 response: 00:39:23.425 { 00:39:23.425 "code": -19, 00:39:23.425 "message": "No such device" 00:39:23.425 } 00:39:23.425 08:38:20 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:23.425 08:38:20 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:23.425 08:38:20 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:23.426 08:38:20 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:23.426 08:38:20 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:39:23.426 08:38:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:23.688 08:38:20 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:23.688 08:38:20 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:39:23.688 08:38:20 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:23.688 08:38:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:23.688 08:38:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:23.688 08:38:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:23.688 08:38:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.rDkBPB2nZm 00:39:23.688 08:38:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:23.688 08:38:20 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:23.688 08:38:20 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:23.688 08:38:20 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:23.688 08:38:20 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:23.688 08:38:20 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:23.688 08:38:20 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:23.688 08:38:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rDkBPB2nZm 00:39:23.688 08:38:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.rDkBPB2nZm 00:39:23.688 08:38:20 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.rDkBPB2nZm 00:39:23.688 08:38:20 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rDkBPB2nZm 00:39:23.688 08:38:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rDkBPB2nZm 00:39:23.950 08:38:21 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:23.950 08:38:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:24.211 nvme0n1 00:39:24.211 08:38:21 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:39:24.211 08:38:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:24.211 08:38:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:24.211 08:38:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:24.211 08:38:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:24.211 08:38:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:24.474 08:38:21 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:39:24.474 08:38:21 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:39:24.474 08:38:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:24.474 08:38:21 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:39:24.474 08:38:21 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:39:24.474 08:38:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:24.474 08:38:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:24.474 08:38:21 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:24.736 08:38:21 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:39:24.736 08:38:21 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:39:24.736 08:38:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:24.736 08:38:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:24.736 08:38:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:24.736 08:38:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:24.736 08:38:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:24.997 08:38:22 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:39:24.997 08:38:22 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:24.997 08:38:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:24.997 08:38:22 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:39:24.997 08:38:22 keyring_file -- keyring/file.sh@105 -- # jq length 00:39:24.997 08:38:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:25.259 08:38:22 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:39:25.259 08:38:22 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rDkBPB2nZm 00:39:25.259 08:38:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rDkBPB2nZm 00:39:25.520 08:38:22 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.lApxnzcFWQ 00:39:25.520 08:38:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.lApxnzcFWQ 00:39:25.520 08:38:22 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:25.520 08:38:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:25.781 nvme0n1 00:39:25.781 08:38:22 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:39:25.781 08:38:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:39:26.042 08:38:23 keyring_file -- keyring/file.sh@113 -- # config='{ 00:39:26.042 "subsystems": [ 00:39:26.042 { 00:39:26.042 "subsystem": "keyring", 00:39:26.042 "config": [ 00:39:26.042 { 00:39:26.042 "method": "keyring_file_add_key", 00:39:26.042 "params": { 00:39:26.042 "name": "key0", 00:39:26.042 "path": "/tmp/tmp.rDkBPB2nZm" 00:39:26.042 } 00:39:26.042 }, 00:39:26.042 { 00:39:26.042 "method": "keyring_file_add_key", 00:39:26.042 "params": { 00:39:26.042 "name": "key1", 00:39:26.042 "path": "/tmp/tmp.lApxnzcFWQ" 00:39:26.042 } 00:39:26.042 } 00:39:26.042 ] 00:39:26.042 
}, 00:39:26.042 { 00:39:26.042 "subsystem": "iobuf", 00:39:26.042 "config": [ 00:39:26.042 { 00:39:26.042 "method": "iobuf_set_options", 00:39:26.043 "params": { 00:39:26.043 "small_pool_count": 8192, 00:39:26.043 "large_pool_count": 1024, 00:39:26.043 "small_bufsize": 8192, 00:39:26.043 "large_bufsize": 135168, 00:39:26.043 "enable_numa": false 00:39:26.043 } 00:39:26.043 } 00:39:26.043 ] 00:39:26.043 }, 00:39:26.043 { 00:39:26.043 "subsystem": "sock", 00:39:26.043 "config": [ 00:39:26.043 { 00:39:26.043 "method": "sock_set_default_impl", 00:39:26.043 "params": { 00:39:26.043 "impl_name": "posix" 00:39:26.043 } 00:39:26.043 }, 00:39:26.043 { 00:39:26.043 "method": "sock_impl_set_options", 00:39:26.043 "params": { 00:39:26.043 "impl_name": "ssl", 00:39:26.043 "recv_buf_size": 4096, 00:39:26.043 "send_buf_size": 4096, 00:39:26.043 "enable_recv_pipe": true, 00:39:26.043 "enable_quickack": false, 00:39:26.043 "enable_placement_id": 0, 00:39:26.043 "enable_zerocopy_send_server": true, 00:39:26.043 "enable_zerocopy_send_client": false, 00:39:26.043 "zerocopy_threshold": 0, 00:39:26.043 "tls_version": 0, 00:39:26.043 "enable_ktls": false 00:39:26.043 } 00:39:26.043 }, 00:39:26.043 { 00:39:26.043 "method": "sock_impl_set_options", 00:39:26.043 "params": { 00:39:26.043 "impl_name": "posix", 00:39:26.043 "recv_buf_size": 2097152, 00:39:26.043 "send_buf_size": 2097152, 00:39:26.043 "enable_recv_pipe": true, 00:39:26.043 "enable_quickack": false, 00:39:26.043 "enable_placement_id": 0, 00:39:26.043 "enable_zerocopy_send_server": true, 00:39:26.043 "enable_zerocopy_send_client": false, 00:39:26.043 "zerocopy_threshold": 0, 00:39:26.043 "tls_version": 0, 00:39:26.043 "enable_ktls": false 00:39:26.043 } 00:39:26.043 } 00:39:26.043 ] 00:39:26.043 }, 00:39:26.043 { 00:39:26.043 "subsystem": "vmd", 00:39:26.043 "config": [] 00:39:26.043 }, 00:39:26.043 { 00:39:26.043 "subsystem": "accel", 00:39:26.043 "config": [ 00:39:26.043 { 00:39:26.043 "method": "accel_set_options", 00:39:26.043 "params": { 00:39:26.043 "small_cache_size": 128, 00:39:26.043 "large_cache_size": 16, 00:39:26.043 "task_count": 2048, 00:39:26.043 "sequence_count": 2048, 00:39:26.043 "buf_count": 2048 00:39:26.043 } 00:39:26.043 } 00:39:26.043 ] 00:39:26.043 }, 00:39:26.043 { 00:39:26.043 "subsystem": "bdev", 00:39:26.043 "config": [ 00:39:26.043 { 00:39:26.043 "method": "bdev_set_options", 00:39:26.043 "params": { 00:39:26.043 "bdev_io_pool_size": 65535, 00:39:26.043 "bdev_io_cache_size": 256, 00:39:26.043 "bdev_auto_examine": true, 00:39:26.043 "iobuf_small_cache_size": 128, 00:39:26.043 "iobuf_large_cache_size": 16 00:39:26.043 } 00:39:26.043 }, 00:39:26.043 { 00:39:26.043 "method": "bdev_raid_set_options", 00:39:26.043 "params": { 00:39:26.043 "process_window_size_kb": 1024, 00:39:26.043 "process_max_bandwidth_mb_sec": 0 00:39:26.043 } 00:39:26.043 }, 00:39:26.043 { 00:39:26.043 "method": "bdev_iscsi_set_options", 00:39:26.043 "params": { 00:39:26.043 "timeout_sec": 30 00:39:26.043 } 00:39:26.043 }, 00:39:26.043 { 00:39:26.043 "method": "bdev_nvme_set_options", 00:39:26.043 "params": { 00:39:26.043 "action_on_timeout": "none", 00:39:26.043 "timeout_us": 0, 00:39:26.043 "timeout_admin_us": 0, 00:39:26.043 "keep_alive_timeout_ms": 10000, 00:39:26.043 "arbitration_burst": 0, 00:39:26.043 "low_priority_weight": 0, 00:39:26.043 "medium_priority_weight": 0, 00:39:26.043 "high_priority_weight": 0, 00:39:26.043 "nvme_adminq_poll_period_us": 10000, 00:39:26.043 "nvme_ioq_poll_period_us": 0, 00:39:26.043 "io_queue_requests": 512, 00:39:26.043 
"delay_cmd_submit": true, 00:39:26.043 "transport_retry_count": 4, 00:39:26.043 "bdev_retry_count": 3, 00:39:26.043 "transport_ack_timeout": 0, 00:39:26.043 "ctrlr_loss_timeout_sec": 0, 00:39:26.043 "reconnect_delay_sec": 0, 00:39:26.043 "fast_io_fail_timeout_sec": 0, 00:39:26.043 "disable_auto_failback": false, 00:39:26.043 "generate_uuids": false, 00:39:26.043 "transport_tos": 0, 00:39:26.043 "nvme_error_stat": false, 00:39:26.043 "rdma_srq_size": 0, 00:39:26.043 "io_path_stat": false, 00:39:26.043 "allow_accel_sequence": false, 00:39:26.043 "rdma_max_cq_size": 0, 00:39:26.043 "rdma_cm_event_timeout_ms": 0, 00:39:26.043 "dhchap_digests": [ 00:39:26.043 "sha256", 00:39:26.043 "sha384", 00:39:26.043 "sha512" 00:39:26.043 ], 00:39:26.043 "dhchap_dhgroups": [ 00:39:26.043 "null", 00:39:26.043 "ffdhe2048", 00:39:26.043 "ffdhe3072", 00:39:26.043 "ffdhe4096", 00:39:26.043 "ffdhe6144", 00:39:26.043 "ffdhe8192" 00:39:26.043 ] 00:39:26.043 } 00:39:26.043 }, 00:39:26.043 { 00:39:26.043 "method": "bdev_nvme_attach_controller", 00:39:26.043 "params": { 00:39:26.043 "name": "nvme0", 00:39:26.043 "trtype": "TCP", 00:39:26.043 "adrfam": "IPv4", 00:39:26.043 "traddr": "127.0.0.1", 00:39:26.043 "trsvcid": "4420", 00:39:26.043 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:26.043 "prchk_reftag": false, 00:39:26.043 "prchk_guard": false, 00:39:26.043 "ctrlr_loss_timeout_sec": 0, 00:39:26.043 "reconnect_delay_sec": 0, 00:39:26.043 "fast_io_fail_timeout_sec": 0, 00:39:26.043 "psk": "key0", 00:39:26.043 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:26.043 "hdgst": false, 00:39:26.043 "ddgst": false, 00:39:26.043 "multipath": "multipath" 00:39:26.043 } 00:39:26.043 }, 00:39:26.043 { 00:39:26.043 "method": "bdev_nvme_set_hotplug", 00:39:26.043 "params": { 00:39:26.043 "period_us": 100000, 00:39:26.043 "enable": false 00:39:26.043 } 00:39:26.043 }, 00:39:26.043 { 00:39:26.043 "method": "bdev_wait_for_examine" 00:39:26.043 } 00:39:26.043 ] 00:39:26.043 }, 00:39:26.043 { 00:39:26.043 "subsystem": "nbd", 00:39:26.043 "config": [] 00:39:26.043 } 00:39:26.043 ] 00:39:26.043 }' 00:39:26.043 08:38:23 keyring_file -- keyring/file.sh@115 -- # killprocess 2304416 00:39:26.043 08:38:23 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2304416 ']' 00:39:26.043 08:38:23 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2304416 00:39:26.043 08:38:23 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:26.043 08:38:23 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:26.043 08:38:23 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2304416 00:39:26.043 08:38:23 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:26.043 08:38:23 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:26.043 08:38:23 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2304416' 00:39:26.043 killing process with pid 2304416 00:39:26.043 08:38:23 keyring_file -- common/autotest_common.sh@973 -- # kill 2304416 00:39:26.043 Received shutdown signal, test time was about 1.000000 seconds 00:39:26.043 00:39:26.043 Latency(us) 00:39:26.043 [2024-11-28T07:38:23.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:26.043 [2024-11-28T07:38:23.332Z] =================================================================================================================== 00:39:26.043 [2024-11-28T07:38:23.332Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:26.043 08:38:23 
keyring_file -- common/autotest_common.sh@978 -- # wait 2304416 00:39:26.304 08:38:23 keyring_file -- keyring/file.sh@118 -- # bperfpid=2306226 00:39:26.304 08:38:23 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2306226 /var/tmp/bperf.sock 00:39:26.304 08:38:23 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2306226 ']' 00:39:26.304 08:38:23 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:26.304 08:38:23 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:39:26.304 08:38:23 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:26.304 08:38:23 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:26.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:26.304 08:38:23 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:26.304 08:38:23 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:39:26.304 "subsystems": [ 00:39:26.304 { 00:39:26.304 "subsystem": "keyring", 00:39:26.304 "config": [ 00:39:26.304 { 00:39:26.304 "method": "keyring_file_add_key", 00:39:26.304 "params": { 00:39:26.304 "name": "key0", 00:39:26.304 "path": "/tmp/tmp.rDkBPB2nZm" 00:39:26.304 } 00:39:26.304 }, 00:39:26.304 { 00:39:26.304 "method": "keyring_file_add_key", 00:39:26.304 "params": { 00:39:26.304 "name": "key1", 00:39:26.304 "path": "/tmp/tmp.lApxnzcFWQ" 00:39:26.304 } 00:39:26.304 } 00:39:26.304 ] 00:39:26.304 }, 00:39:26.304 { 00:39:26.304 "subsystem": "iobuf", 00:39:26.304 "config": [ 00:39:26.304 { 00:39:26.304 "method": "iobuf_set_options", 00:39:26.304 "params": { 00:39:26.304 "small_pool_count": 8192, 00:39:26.304 "large_pool_count": 1024, 00:39:26.304 "small_bufsize": 8192, 00:39:26.304 "large_bufsize": 135168, 00:39:26.304 "enable_numa": false 00:39:26.304 } 00:39:26.304 } 00:39:26.304 ] 00:39:26.304 }, 00:39:26.304 { 00:39:26.304 "subsystem": "sock", 00:39:26.304 "config": [ 00:39:26.304 { 00:39:26.304 "method": "sock_set_default_impl", 00:39:26.304 "params": { 00:39:26.304 "impl_name": "posix" 00:39:26.304 } 00:39:26.304 }, 00:39:26.304 { 00:39:26.304 "method": "sock_impl_set_options", 00:39:26.304 "params": { 00:39:26.304 "impl_name": "ssl", 00:39:26.304 "recv_buf_size": 4096, 00:39:26.304 "send_buf_size": 4096, 00:39:26.304 "enable_recv_pipe": true, 00:39:26.304 "enable_quickack": false, 00:39:26.304 "enable_placement_id": 0, 00:39:26.304 "enable_zerocopy_send_server": true, 00:39:26.304 "enable_zerocopy_send_client": false, 00:39:26.304 "zerocopy_threshold": 0, 00:39:26.304 "tls_version": 0, 00:39:26.304 "enable_ktls": false 00:39:26.304 } 00:39:26.304 }, 00:39:26.304 { 00:39:26.304 "method": "sock_impl_set_options", 00:39:26.304 "params": { 00:39:26.304 "impl_name": "posix", 00:39:26.304 "recv_buf_size": 2097152, 00:39:26.304 "send_buf_size": 2097152, 00:39:26.304 "enable_recv_pipe": true, 00:39:26.304 "enable_quickack": false, 00:39:26.304 "enable_placement_id": 0, 00:39:26.304 "enable_zerocopy_send_server": true, 00:39:26.304 "enable_zerocopy_send_client": false, 00:39:26.304 "zerocopy_threshold": 0, 00:39:26.304 "tls_version": 0, 00:39:26.304 "enable_ktls": false 00:39:26.304 } 00:39:26.304 } 00:39:26.304 ] 00:39:26.304 }, 00:39:26.304 { 00:39:26.304 "subsystem": "vmd", 00:39:26.304 "config": [] 00:39:26.304 }, 
00:39:26.304 { 00:39:26.304 "subsystem": "accel", 00:39:26.304 "config": [ 00:39:26.304 { 00:39:26.304 "method": "accel_set_options", 00:39:26.304 "params": { 00:39:26.304 "small_cache_size": 128, 00:39:26.304 "large_cache_size": 16, 00:39:26.304 "task_count": 2048, 00:39:26.304 "sequence_count": 2048, 00:39:26.304 "buf_count": 2048 00:39:26.304 } 00:39:26.304 } 00:39:26.304 ] 00:39:26.304 }, 00:39:26.304 { 00:39:26.304 "subsystem": "bdev", 00:39:26.304 "config": [ 00:39:26.304 { 00:39:26.304 "method": "bdev_set_options", 00:39:26.304 "params": { 00:39:26.304 "bdev_io_pool_size": 65535, 00:39:26.304 "bdev_io_cache_size": 256, 00:39:26.304 "bdev_auto_examine": true, 00:39:26.304 "iobuf_small_cache_size": 128, 00:39:26.304 "iobuf_large_cache_size": 16 00:39:26.304 } 00:39:26.304 }, 00:39:26.304 { 00:39:26.304 "method": "bdev_raid_set_options", 00:39:26.304 "params": { 00:39:26.304 "process_window_size_kb": 1024, 00:39:26.304 "process_max_bandwidth_mb_sec": 0 00:39:26.304 } 00:39:26.304 }, 00:39:26.304 { 00:39:26.305 "method": "bdev_iscsi_set_options", 00:39:26.305 "params": { 00:39:26.305 "timeout_sec": 30 00:39:26.305 } 00:39:26.305 }, 00:39:26.305 { 00:39:26.305 "method": "bdev_nvme_set_options", 00:39:26.305 "params": { 00:39:26.305 "action_on_timeout": "none", 00:39:26.305 "timeout_us": 0, 00:39:26.305 "timeout_admin_us": 0, 00:39:26.305 "keep_alive_timeout_ms": 10000, 00:39:26.305 "arbitration_burst": 0, 00:39:26.305 "low_priority_weight": 0, 00:39:26.305 "medium_priority_weight": 0, 00:39:26.305 "high_priority_weight": 0, 00:39:26.305 "nvme_adminq_poll_period_us": 10000, 00:39:26.305 "nvme_ioq_poll_period_us": 0, 00:39:26.305 "io_queue_requests": 512, 00:39:26.305 "delay_cmd_submit": true, 00:39:26.305 "transport_retry_count": 4, 00:39:26.305 "bdev_retry_count": 3, 00:39:26.305 "transport_ack_timeout": 0, 00:39:26.305 "ctrlr_loss_timeout_sec": 0, 00:39:26.305 "reconnect_delay_sec": 0, 00:39:26.305 "fast_io_fail_timeout_sec": 0, 00:39:26.305 "disable_auto_failback": false, 00:39:26.305 "generate_uuids": false, 00:39:26.305 "transport_tos": 0, 00:39:26.305 "nvme_error_stat": false, 00:39:26.305 "rdma_srq_size": 0, 00:39:26.305 "io_path_stat": false, 00:39:26.305 "allow_accel_sequence": false, 00:39:26.305 "rdma_max_cq_size": 0, 00:39:26.305 "rdma_cm_event_timeout_ms": 0, 00:39:26.305 "dhchap_digests": [ 00:39:26.305 "sha256", 00:39:26.305 "sha384", 00:39:26.305 "sha512" 00:39:26.305 ], 00:39:26.305 "dhchap_dhgroups": [ 00:39:26.305 "null", 00:39:26.305 "ffdhe2048", 00:39:26.305 "ffdhe3072", 00:39:26.305 "ffdhe4096", 00:39:26.305 "ffdhe6144", 00:39:26.305 "ffdhe8192" 00:39:26.305 ] 00:39:26.305 } 00:39:26.305 }, 00:39:26.305 { 00:39:26.305 "method": "bdev_nvme_attach_controller", 00:39:26.305 "params": { 00:39:26.305 "name": "nvme0", 00:39:26.305 "trtype": "TCP", 00:39:26.305 "adrfam": "IPv4", 00:39:26.305 "traddr": "127.0.0.1", 00:39:26.305 "trsvcid": "4420", 00:39:26.305 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:26.305 "prchk_reftag": false, 00:39:26.305 "prchk_guard": false, 00:39:26.305 "ctrlr_loss_timeout_sec": 0, 00:39:26.305 "reconnect_delay_sec": 0, 00:39:26.305 "fast_io_fail_timeout_sec": 0, 00:39:26.305 "psk": "key0", 00:39:26.305 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:26.305 "hdgst": false, 00:39:26.305 "ddgst": false, 00:39:26.305 "multipath": "multipath" 00:39:26.305 } 00:39:26.305 }, 00:39:26.305 { 00:39:26.305 "method": "bdev_nvme_set_hotplug", 00:39:26.305 "params": { 00:39:26.305 "period_us": 100000, 00:39:26.305 "enable": false 00:39:26.305 } 00:39:26.305 }, 
00:39:26.305 { 00:39:26.305 "method": "bdev_wait_for_examine" 00:39:26.305 } 00:39:26.305 ] 00:39:26.305 }, 00:39:26.305 { 00:39:26.305 "subsystem": "nbd", 00:39:26.305 "config": [] 00:39:26.305 } 00:39:26.305 ] 00:39:26.305 }' 00:39:26.305 08:38:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:26.305 [2024-11-28 08:38:23.412262] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 00:39:26.305 [2024-11-28 08:38:23.412315] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2306226 ] 00:39:26.305 [2024-11-28 08:38:23.496754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:26.305 [2024-11-28 08:38:23.524947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:26.565 [2024-11-28 08:38:23.669250] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:27.136 08:38:24 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:27.136 08:38:24 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:27.136 08:38:24 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:39:27.136 08:38:24 keyring_file -- keyring/file.sh@121 -- # jq length 00:39:27.136 08:38:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:27.136 08:38:24 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:39:27.136 08:38:24 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:39:27.136 08:38:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:27.136 08:38:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:27.136 08:38:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:27.136 08:38:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:27.136 08:38:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:27.397 08:38:24 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:39:27.397 08:38:24 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:39:27.397 08:38:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:27.397 08:38:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:27.397 08:38:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:27.397 08:38:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:27.397 08:38:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:27.658 08:38:24 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:39:27.659 08:38:24 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:39:27.659 08:38:24 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:39:27.659 08:38:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:39:27.659 08:38:24 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:39:27.659 08:38:24 keyring_file -- keyring/file.sh@1 -- # cleanup 00:39:27.659 08:38:24 keyring_file -- keyring/file.sh@19 
-- # rm -f /tmp/tmp.rDkBPB2nZm /tmp/tmp.lApxnzcFWQ 00:39:27.659 08:38:24 keyring_file -- keyring/file.sh@20 -- # killprocess 2306226 00:39:27.659 08:38:24 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2306226 ']' 00:39:27.659 08:38:24 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2306226 00:39:27.659 08:38:24 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:27.659 08:38:24 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:27.920 08:38:24 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2306226 00:39:27.920 08:38:25 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:27.920 08:38:25 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:27.920 08:38:25 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2306226' 00:39:27.920 killing process with pid 2306226 00:39:27.920 08:38:25 keyring_file -- common/autotest_common.sh@973 -- # kill 2306226 00:39:27.920 Received shutdown signal, test time was about 1.000000 seconds 00:39:27.920 00:39:27.920 Latency(us) 00:39:27.920 [2024-11-28T07:38:25.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:27.920 [2024-11-28T07:38:25.209Z] =================================================================================================================== 00:39:27.920 [2024-11-28T07:38:25.209Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:27.920 08:38:25 keyring_file -- common/autotest_common.sh@978 -- # wait 2306226 00:39:27.920 08:38:25 keyring_file -- keyring/file.sh@21 -- # killprocess 2304339 00:39:27.920 08:38:25 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2304339 ']' 00:39:27.920 08:38:25 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2304339 00:39:27.920 08:38:25 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:27.921 08:38:25 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:27.921 08:38:25 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2304339 00:39:27.921 08:38:25 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:27.921 08:38:25 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:27.921 08:38:25 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2304339' 00:39:27.921 killing process with pid 2304339 00:39:27.921 08:38:25 keyring_file -- common/autotest_common.sh@973 -- # kill 2304339 00:39:27.921 08:38:25 keyring_file -- common/autotest_common.sh@978 -- # wait 2304339 00:39:28.181 00:39:28.181 real 0m12.160s 00:39:28.181 user 0m29.312s 00:39:28.181 sys 0m2.802s 00:39:28.181 08:38:25 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:28.181 08:38:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:28.181 ************************************ 00:39:28.181 END TEST keyring_file 00:39:28.181 ************************************ 00:39:28.181 08:38:25 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:39:28.181 08:38:25 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:28.181 08:38:25 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:28.181 08:38:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:28.181 08:38:25 -- 
common/autotest_common.sh@10 -- # set +x 00:39:28.181 ************************************ 00:39:28.181 START TEST keyring_linux 00:39:28.181 ************************************ 00:39:28.181 08:38:25 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:28.181 Joined session keyring: 169749956 00:39:28.443 * Looking for test storage... 00:39:28.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:28.443 08:38:25 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:28.443 08:38:25 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:39:28.443 08:38:25 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:28.443 08:38:25 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@345 -- # : 1 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@368 -- # return 0 00:39:28.443 08:38:25 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:28.443 08:38:25 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:28.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.443 --rc genhtml_branch_coverage=1 00:39:28.443 --rc genhtml_function_coverage=1 00:39:28.443 --rc genhtml_legend=1 00:39:28.443 --rc geninfo_all_blocks=1 00:39:28.443 --rc geninfo_unexecuted_blocks=1 00:39:28.443 00:39:28.443 ' 00:39:28.443 08:38:25 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:28.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.443 --rc genhtml_branch_coverage=1 00:39:28.443 --rc genhtml_function_coverage=1 00:39:28.443 --rc genhtml_legend=1 00:39:28.443 --rc geninfo_all_blocks=1 00:39:28.443 --rc geninfo_unexecuted_blocks=1 00:39:28.443 00:39:28.443 ' 00:39:28.443 08:38:25 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:28.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.443 --rc genhtml_branch_coverage=1 00:39:28.443 --rc genhtml_function_coverage=1 00:39:28.443 --rc genhtml_legend=1 00:39:28.443 --rc geninfo_all_blocks=1 00:39:28.443 --rc geninfo_unexecuted_blocks=1 00:39:28.443 00:39:28.443 ' 00:39:28.443 08:38:25 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:28.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:28.443 --rc genhtml_branch_coverage=1 00:39:28.443 --rc genhtml_function_coverage=1 00:39:28.443 --rc genhtml_legend=1 00:39:28.443 --rc geninfo_all_blocks=1 00:39:28.443 --rc geninfo_unexecuted_blocks=1 00:39:28.443 00:39:28.443 ' 00:39:28.443 08:38:25 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:28.443 08:38:25 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:28.443 08:38:25 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:28.443 08:38:25 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:28.443 08:38:25 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:28.443 08:38:25 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:28.443 08:38:25 keyring_linux -- paths/export.sh@5 -- # export PATH 00:39:28.443 08:38:25 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
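(Aside for readers following the trace: the test/nvmf/common.sh sourcing above boils down to exporting a handful of well-known defaults. A minimal standalone sketch follows, using only values visible in this trace; the derivation of NVME_HOSTID as the uuid suffix of the generated NQN matches what the log shows, and anything beyond that is a simplifying assumption, not the full script.)

#!/usr/bin/env bash
# Sketch of the environment test/nvmf/common.sh establishes above.
# Values are taken from the trace; this is not the real script.
NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_TCP_IP_ADDRESS=127.0.0.1
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
NVME_HOSTNQN=$(nvme gen-hostnqn)        # requires nvme-cli, as used in the trace
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # host ID is the UUID portion of the NQN
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
echo "host options: ${NVME_HOST[*]} (target port $NVMF_PORT)"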
00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:28.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:28.443 08:38:25 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:28.443 08:38:25 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:28.443 08:38:25 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:28.443 08:38:25 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:39:28.443 08:38:25 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:39:28.443 08:38:25 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:39:28.443 08:38:25 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:39:28.443 08:38:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:28.443 08:38:25 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:39:28.443 08:38:25 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:28.443 08:38:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:28.443 08:38:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:39:28.443 08:38:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:28.443 08:38:25 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:28.444 08:38:25 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:28.444 08:38:25 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:28.704 08:38:25 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:39:28.704 08:38:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:39:28.704 /tmp/:spdk-test:key0 00:39:28.704 08:38:25 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:39:28.704 08:38:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:28.704 08:38:25 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:39:28.704 08:38:25 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:28.704 08:38:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:28.704 08:38:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:39:28.704 
08:38:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:28.704 08:38:25 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:28.704 08:38:25 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:28.704 08:38:25 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:28.704 08:38:25 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:28.704 08:38:25 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:28.704 08:38:25 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:28.704 08:38:25 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:39:28.704 08:38:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:39:28.704 /tmp/:spdk-test:key1 00:39:28.704 08:38:25 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2306681 00:39:28.704 08:38:25 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2306681 00:39:28.704 08:38:25 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:28.704 08:38:25 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2306681 ']' 00:39:28.704 08:38:25 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:28.704 08:38:25 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:28.705 08:38:25 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:28.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:28.705 08:38:25 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:28.705 08:38:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:28.705 [2024-11-28 08:38:25.843561] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:39:28.705 [2024-11-28 08:38:25.843632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2306681 ] 00:39:28.705 [2024-11-28 08:38:25.931295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:28.705 [2024-11-28 08:38:25.966361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:29.645 08:38:26 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:29.646 08:38:26 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:29.646 08:38:26 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:39:29.646 08:38:26 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.646 08:38:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:29.646 [2024-11-28 08:38:26.618247] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:29.646 null0 00:39:29.646 [2024-11-28 08:38:26.650302] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:29.646 [2024-11-28 08:38:26.650662] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:29.646 08:38:26 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.646 08:38:26 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:39:29.646 249255761 00:39:29.646 08:38:26 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:39:29.646 543519287 00:39:29.646 08:38:26 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2306995 00:39:29.646 08:38:26 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2306995 /var/tmp/bperf.sock 00:39:29.646 08:38:26 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:39:29.646 08:38:26 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2306995 ']' 00:39:29.646 08:38:26 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:29.646 08:38:26 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:29.646 08:38:26 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:29.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:29.646 08:38:26 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:29.646 08:38:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:29.646 [2024-11-28 08:38:26.735081] Starting SPDK v25.01-pre git sha1 37db29af3 / DPDK 24.03.0 initialization... 
00:39:29.646 [2024-11-28 08:38:26.735132] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2306995 ] 00:39:29.646 [2024-11-28 08:38:26.817111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:29.646 [2024-11-28 08:38:26.846850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:30.589 08:38:27 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:30.589 08:38:27 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:30.589 08:38:27 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:39:30.589 08:38:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:39:30.589 08:38:27 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:39:30.589 08:38:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:30.850 08:38:27 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:30.850 08:38:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:30.850 [2024-11-28 08:38:28.080397] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:31.110 nvme0n1 00:39:31.110 08:38:28 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:39:31.110 08:38:28 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:39:31.110 08:38:28 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:31.110 08:38:28 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:31.110 08:38:28 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:31.110 08:38:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:31.110 08:38:28 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:39:31.110 08:38:28 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:31.110 08:38:28 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:39:31.110 08:38:28 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:39:31.110 08:38:28 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:31.110 08:38:28 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:39:31.110 08:38:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:31.372 08:38:28 keyring_linux -- keyring/linux.sh@25 -- # sn=249255761 00:39:31.372 08:38:28 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:39:31.372 08:38:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:31.372 08:38:28 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 249255761 == \2\4\9\2\5\5\7\6\1 ]] 00:39:31.372 08:38:28 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 249255761 00:39:31.372 08:38:28 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:39:31.372 08:38:28 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:31.372 Running I/O for 1 seconds... 00:39:32.761 24209.00 IOPS, 94.57 MiB/s 00:39:32.761 Latency(us) 00:39:32.761 [2024-11-28T07:38:30.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:32.761 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:32.761 nvme0n1 : 1.01 24208.36 94.56 0.00 0.00 5271.68 4450.99 10103.47 00:39:32.761 [2024-11-28T07:38:30.050Z] =================================================================================================================== 00:39:32.761 [2024-11-28T07:38:30.050Z] Total : 24208.36 94.56 0.00 0.00 5271.68 4450.99 10103.47 00:39:32.761 { 00:39:32.761 "results": [ 00:39:32.761 { 00:39:32.761 "job": "nvme0n1", 00:39:32.761 "core_mask": "0x2", 00:39:32.761 "workload": "randread", 00:39:32.761 "status": "finished", 00:39:32.761 "queue_depth": 128, 00:39:32.761 "io_size": 4096, 00:39:32.761 "runtime": 1.005314, 00:39:32.761 "iops": 24208.356792007275, 00:39:32.761 "mibps": 94.56389371877842, 00:39:32.761 "io_failed": 0, 00:39:32.761 "io_timeout": 0, 00:39:32.761 "avg_latency_us": 5271.6761518127405, 00:39:32.761 "min_latency_us": 4450.986666666667, 00:39:32.761 "max_latency_us": 10103.466666666667 00:39:32.761 } 00:39:32.761 ], 00:39:32.761 "core_count": 1 00:39:32.761 } 00:39:32.761 08:38:29 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:32.761 08:38:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:32.761 08:38:29 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:39:32.761 08:38:29 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:39:32.761 08:38:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:32.761 08:38:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:32.761 08:38:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:32.761 08:38:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:32.761 08:38:29 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:39:32.761 08:38:30 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:32.761 08:38:30 keyring_linux -- keyring/linux.sh@23 -- # return 00:39:32.761 08:38:30 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:32.761 08:38:30 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:39:32.761 08:38:30 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1
00:39:32.761 08:38:30 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:39:32.761 08:38:30 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:39:32.761 08:38:30 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:39:32.761 08:38:30 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:39:32.761 08:38:30 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:39:32.761 08:38:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:39:33.023 [2024-11-28 08:38:30.169741] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:39:33.023 [2024-11-28 08:38:30.170316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d99e0 (107): Transport endpoint is not connected
00:39:33.023 [2024-11-28 08:38:30.171312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d99e0 (9): Bad file descriptor
00:39:33.023 [2024-11-28 08:38:30.172314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state
00:39:33.023 [2024-11-28 08:38:30.172327] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:39:33.023 [2024-11-28 08:38:30.172332] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:39:33.023 [2024-11-28 08:38:30.172339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
00:39:33.023 request: 00:39:33.023 { 00:39:33.023 "name": "nvme0", 00:39:33.023 "trtype": "tcp", 00:39:33.023 "traddr": "127.0.0.1", 00:39:33.023 "adrfam": "ipv4", 00:39:33.023 "trsvcid": "4420", 00:39:33.023 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:33.023 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:33.023 "prchk_reftag": false, 00:39:33.023 "prchk_guard": false, 00:39:33.023 "hdgst": false, 00:39:33.023 "ddgst": false, 00:39:33.023 "psk": ":spdk-test:key1", 00:39:33.023 "allow_unrecognized_csi": false, 00:39:33.023 "method": "bdev_nvme_attach_controller", 00:39:33.023 "req_id": 1 00:39:33.023 } 00:39:33.023 Got JSON-RPC error response 00:39:33.023 response: 00:39:33.023 { 00:39:33.023 "code": -5, 00:39:33.023 "message": "Input/output error" 00:39:33.023 } 00:39:33.023 08:38:30 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:39:33.023 08:38:30 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:33.023 08:38:30 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:33.023 08:38:30 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:33.023 08:38:30 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:39:33.023 08:38:30 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:33.023 08:38:30 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:39:33.023 08:38:30 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:39:33.023 08:38:30 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:39:33.023 08:38:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:33.023 08:38:30 keyring_linux -- keyring/linux.sh@33 -- # sn=249255761 00:39:33.023 08:38:30 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 249255761 00:39:33.023 1 links removed 00:39:33.023 08:38:30 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:33.023 08:38:30 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:39:33.023 08:38:30 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:39:33.023 08:38:30 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:39:33.023 08:38:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:39:33.023 08:38:30 keyring_linux -- keyring/linux.sh@33 -- # sn=543519287 00:39:33.023 08:38:30 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 543519287 00:39:33.023 1 links removed 00:39:33.023 08:38:30 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2306995 00:39:33.023 08:38:30 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2306995 ']' 00:39:33.023 08:38:30 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2306995 00:39:33.023 08:38:30 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:39:33.023 08:38:30 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:33.023 08:38:30 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2306995 00:39:33.023 08:38:30 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:33.023 08:38:30 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:33.023 08:38:30 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2306995' 00:39:33.023 killing process with pid 2306995 00:39:33.023 08:38:30 keyring_linux -- common/autotest_common.sh@973 -- # kill 2306995 00:39:33.023 Received shutdown signal, test time was about 1.000000 seconds 00:39:33.023 00:39:33.023 
Latency(us) 00:39:33.023 [2024-11-28T07:38:30.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:33.023 [2024-11-28T07:38:30.312Z] =================================================================================================================== 00:39:33.023 [2024-11-28T07:38:30.312Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:33.023 08:38:30 keyring_linux -- common/autotest_common.sh@978 -- # wait 2306995 00:39:33.284 08:38:30 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2306681 00:39:33.284 08:38:30 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2306681 ']' 00:39:33.284 08:38:30 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2306681 00:39:33.284 08:38:30 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:39:33.284 08:38:30 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:33.284 08:38:30 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2306681 00:39:33.284 08:38:30 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:33.284 08:38:30 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:33.284 08:38:30 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2306681' 00:39:33.284 killing process with pid 2306681 00:39:33.284 08:38:30 keyring_linux -- common/autotest_common.sh@973 -- # kill 2306681 00:39:33.284 08:38:30 keyring_linux -- common/autotest_common.sh@978 -- # wait 2306681 00:39:33.558 00:39:33.558 real 0m5.190s 00:39:33.558 user 0m9.707s 00:39:33.558 sys 0m1.376s 00:39:33.558 08:38:30 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:33.558 08:38:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:33.558 ************************************ 00:39:33.558 END TEST keyring_linux 00:39:33.558 ************************************ 00:39:33.558 08:38:30 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:39:33.558 08:38:30 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:39:33.558 08:38:30 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:39:33.558 08:38:30 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:39:33.558 08:38:30 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:39:33.558 08:38:30 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:39:33.558 08:38:30 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:39:33.558 08:38:30 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:39:33.558 08:38:30 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:39:33.558 08:38:30 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:39:33.558 08:38:30 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:39:33.558 08:38:30 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:39:33.558 08:38:30 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:39:33.558 08:38:30 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:39:33.558 08:38:30 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:39:33.558 08:38:30 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:39:33.558 08:38:30 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:39:33.558 08:38:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:33.558 08:38:30 -- common/autotest_common.sh@10 -- # set +x 00:39:33.558 08:38:30 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:39:33.558 08:38:30 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:39:33.558 08:38:30 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:39:33.558 08:38:30 -- common/autotest_common.sh@10 -- # set +x 00:39:41.701 INFO: APP EXITING 
00:39:41.701 INFO: killing all VMs 00:39:41.701 INFO: killing vhost app 00:39:41.701 WARN: no vhost pid file found 00:39:41.701 INFO: EXIT DONE 00:39:45.003 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:39:45.003 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:39:45.003 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:39:45.003 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:39:45.003 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:39:45.003 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:39:45.003 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:39:45.003 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:39:45.003 0000:65:00.0 (144d a80a): Already using the nvme driver 00:39:45.003 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:39:45.003 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:39:45.003 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:39:45.003 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:39:45.003 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:39:45.003 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:39:45.003 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:39:45.003 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:39:49.209 Cleaning 00:39:49.209 Removing: /var/run/dpdk/spdk0/config 00:39:49.209 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:49.209 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:49.209 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:49.209 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:49.209 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:39:49.209 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:39:49.209 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:39:49.209 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:39:49.209 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:49.209 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:49.209 Removing: /var/run/dpdk/spdk1/config 00:39:49.209 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:39:49.209 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:39:49.209 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:39:49.209 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:39:49.209 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:39:49.209 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:39:49.209 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:39:49.209 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:39:49.209 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:39:49.209 Removing: /var/run/dpdk/spdk1/hugepage_info 00:39:49.209 Removing: /var/run/dpdk/spdk2/config 00:39:49.209 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:39:49.209 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:39:49.209 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:39:49.209 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:39:49.209 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:39:49.209 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:39:49.209 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:39:49.209 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:39:49.209 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:39:49.209 Removing: /var/run/dpdk/spdk2/hugepage_info 00:39:49.209 Removing: 
/var/run/dpdk/spdk3/config 00:39:49.209 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:39:49.209 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:39:49.209 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:39:49.209 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:39:49.209 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:39:49.209 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:39:49.209 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:39:49.209 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:39:49.209 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:39:49.209 Removing: /var/run/dpdk/spdk3/hugepage_info 00:39:49.209 Removing: /var/run/dpdk/spdk4/config 00:39:49.209 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:39:49.209 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:39:49.209 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:39:49.209 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:39:49.209 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:39:49.209 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:39:49.209 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:39:49.209 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:39:49.209 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:39:49.209 Removing: /var/run/dpdk/spdk4/hugepage_info 00:39:49.209 Removing: /dev/shm/bdev_svc_trace.1 00:39:49.209 Removing: /dev/shm/nvmf_trace.0 00:39:49.209 Removing: /dev/shm/spdk_tgt_trace.pid1729831 00:39:49.209 Removing: /var/run/dpdk/spdk0 00:39:49.209 Removing: /var/run/dpdk/spdk1 00:39:49.209 Removing: /var/run/dpdk/spdk2 00:39:49.209 Removing: /var/run/dpdk/spdk3 00:39:49.209 Removing: /var/run/dpdk/spdk4 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1728340 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1729831 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1730561 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1731719 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1731853 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1733118 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1733142 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1733596 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1734737 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1735203 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1735601 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1736003 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1736409 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1736814 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1737169 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1737362 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1737618 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1738993 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1742275 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1742633 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1743001 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1743334 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1743699 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1743734 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1744263 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1744425 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1744787 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1744817 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1745164 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1745248 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1745918 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1746042 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1746388 00:39:49.209 Removing: 
/var/run/dpdk/spdk_pid1751222 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1756295 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1769045 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1769875 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1775046 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1775401 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1780783 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1787771 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1790971 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1803511 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1814537 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1817149 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1818164 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1839179 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1843946 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1899811 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1906278 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1913376 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1921280 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1921307 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1922459 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1923488 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1924993 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1925598 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1925735 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1925945 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1926200 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1926208 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1927208 00:39:49.209 Removing: /var/run/dpdk/spdk_pid1928214 00:39:49.210 Removing: /var/run/dpdk/spdk_pid1929219 00:39:49.210 Removing: /var/run/dpdk/spdk_pid1929898 00:39:49.210 Removing: /var/run/dpdk/spdk_pid1929905 00:39:49.210 Removing: /var/run/dpdk/spdk_pid1930239 00:39:49.210 Removing: /var/run/dpdk/spdk_pid1931677 00:39:49.210 Removing: /var/run/dpdk/spdk_pid1932942 00:39:49.210 Removing: /var/run/dpdk/spdk_pid1942742 00:39:49.210 Removing: /var/run/dpdk/spdk_pid1977265 00:39:49.210 Removing: /var/run/dpdk/spdk_pid1982746 00:39:49.210 Removing: /var/run/dpdk/spdk_pid1984686 00:39:49.210 Removing: /var/run/dpdk/spdk_pid1987033 00:39:49.210 Removing: /var/run/dpdk/spdk_pid1987371 00:39:49.210 Removing: /var/run/dpdk/spdk_pid1987391 00:39:49.210 Removing: /var/run/dpdk/spdk_pid1987731 00:39:49.210 Removing: /var/run/dpdk/spdk_pid1988456 00:39:49.210 Removing: /var/run/dpdk/spdk_pid1990501 00:39:49.210 Removing: /var/run/dpdk/spdk_pid1991875 00:39:49.210 Removing: /var/run/dpdk/spdk_pid1992263 00:39:49.210 Removing: /var/run/dpdk/spdk_pid1994973 00:39:49.210 Removing: /var/run/dpdk/spdk_pid1995686 00:39:49.210 Removing: /var/run/dpdk/spdk_pid1996593 00:39:49.210 Removing: /var/run/dpdk/spdk_pid2001522 00:39:49.210 Removing: /var/run/dpdk/spdk_pid2008736 00:39:49.210 Removing: /var/run/dpdk/spdk_pid2008737 00:39:49.210 Removing: /var/run/dpdk/spdk_pid2008738 00:39:49.210 Removing: /var/run/dpdk/spdk_pid2013409 00:39:49.210 Removing: /var/run/dpdk/spdk_pid2023655 00:39:49.210 Removing: /var/run/dpdk/spdk_pid2028488 00:39:49.210 Removing: /var/run/dpdk/spdk_pid2035651 00:39:49.210 Removing: /var/run/dpdk/spdk_pid2037191 00:39:49.210 Removing: /var/run/dpdk/spdk_pid2038732 00:39:49.210 Removing: /var/run/dpdk/spdk_pid2040549 00:39:49.210 Removing: /var/run/dpdk/spdk_pid2046094 00:39:49.210 Removing: /var/run/dpdk/spdk_pid2051399 00:39:49.210 Removing: /var/run/dpdk/spdk_pid2056442 00:39:49.210 Removing: /var/run/dpdk/spdk_pid2066106 00:39:49.469 Removing: /var/run/dpdk/spdk_pid2066112 00:39:49.470 Removing: 
/var/run/dpdk/spdk_pid2071252 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2071528 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2071856 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2072311 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2072466 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2077907 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2078729 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2084004 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2087262 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2093894 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2100380 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2110581 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2119866 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2119921 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2142844 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2143527 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2144310 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2145160 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2146135 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2146906 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2147649 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2148338 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2153458 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2153727 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2161051 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2161200 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2168366 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2173543 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2184906 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2185590 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2190674 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2191083 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2196181 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2203059 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2206136 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2218574 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2229370 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2231310 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2232432 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2252211 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2256938 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2260124 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2267881 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2267889 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2274340 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2276710 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2279051 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2280320 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2282756 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2284282 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2294220 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2294882 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2295524 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2298323 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2298859 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2299526 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2304339 00:39:49.470 Removing: /var/run/dpdk/spdk_pid2304416 00:39:49.731 Removing: /var/run/dpdk/spdk_pid2306226 00:39:49.731 Removing: /var/run/dpdk/spdk_pid2306681 00:39:49.731 Removing: /var/run/dpdk/spdk_pid2306995 00:39:49.731 Clean 00:39:49.731 08:38:46 -- common/autotest_common.sh@1453 -- # return 0 00:39:49.731 08:38:46 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:39:49.731 08:38:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:49.731 08:38:46 -- common/autotest_common.sh@10 -- # set +x 00:39:49.731 08:38:46 -- 
spdk/autotest.sh@391 -- # timing_exit autotest 00:39:49.731 08:38:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:49.731 08:38:46 -- common/autotest_common.sh@10 -- # set +x 00:39:49.731 08:38:46 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:49.731 08:38:46 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:39:49.731 08:38:46 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:39:49.731 08:38:46 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:39:49.731 08:38:46 -- spdk/autotest.sh@398 -- # hostname 00:39:49.731 08:38:46 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:39:49.992 geninfo: WARNING: invalid characters removed from testname! 00:40:16.583 08:39:12 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:18.502 08:39:15 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:19.884 08:39:17 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:21.871 08:39:18 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:23.286 08:39:20 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:25.199 08:39:22 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:26.583 08:39:23 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:40:26.583 08:39:23 -- spdk/autorun.sh@1 -- $ timing_finish 00:40:26.583 08:39:23 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:40:26.583 08:39:23 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:40:26.583 08:39:23 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:40:26.583 08:39:23 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:26.583 + [[ -n 1642905 ]] 00:40:26.583 + sudo kill 1642905 00:40:26.855 [Pipeline] } 00:40:26.872 [Pipeline] // stage 00:40:26.879 [Pipeline] } 00:40:26.894 [Pipeline] // timeout 00:40:26.901 [Pipeline] } 00:40:26.948 [Pipeline] // catchError 00:40:26.954 [Pipeline] } 00:40:26.969 [Pipeline] // wrap 00:40:26.977 [Pipeline] } 00:40:26.990 [Pipeline] // catchError 00:40:27.000 [Pipeline] stage 00:40:27.002 [Pipeline] { (Epilogue) 00:40:27.015 [Pipeline] catchError 00:40:27.017 [Pipeline] { 00:40:27.029 [Pipeline] echo 00:40:27.031 Cleanup processes 00:40:27.037 [Pipeline] sh 00:40:27.324 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:27.324 2320554 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:27.338 [Pipeline] sh 00:40:27.625 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:27.625 ++ grep -v 'sudo pgrep' 00:40:27.625 ++ awk '{print $1}' 00:40:27.625 + sudo kill -9 00:40:27.625 + true 00:40:27.637 [Pipeline] sh 00:40:27.926 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:40:40.212 [Pipeline] sh 00:40:40.507 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:40:40.507 Artifacts sizes are good 00:40:40.526 [Pipeline] archiveArtifacts 00:40:40.535 Archiving artifacts 00:40:40.683 [Pipeline] sh 00:40:40.971 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:40:40.986 [Pipeline] cleanWs 00:40:41.106 [WS-CLEANUP] Deleting project workspace... 00:40:41.106 [WS-CLEANUP] Deferred wipeout is used... 00:40:41.168 [WS-CLEANUP] done 00:40:41.170 [Pipeline] } 00:40:41.189 [Pipeline] // catchError 00:40:41.202 [Pipeline] sh 00:40:41.491 + logger -p user.info -t JENKINS-CI 00:40:41.503 [Pipeline] } 00:40:41.517 [Pipeline] // stage 00:40:41.522 [Pipeline] } 00:40:41.537 [Pipeline] // node 00:40:41.542 [Pipeline] End of Pipeline 00:40:41.582 Finished: SUCCESS
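
(Appendix for readers reproducing the keyring_linux pass above outside CI: the test reduces to registering an NVMe TLS PSK in the kernel session keyring, pointing a bdevperf instance at it over RPC, and attaching a TCP controller with --psk. The sketch below uses only commands that appear verbatim in this log; it assumes an SPDK target already listening on 127.0.0.1:4420 with TLS enabled and a bdevperf started with -r /var/tmp/bperf.sock -z --wait-for-rpc as shown earlier, and rpc.py stands for scripts/rpc.py in the SPDK tree.)

#!/usr/bin/env bash
set -euo pipefail
RPC="./scripts/rpc.py -s /var/tmp/bperf.sock"

# Register the test PSK in the session keyring; the NVMeTLSkey-1 string is
# the interchange form of the trace's throwaway key
# 00112233445566778899aabbccddeeff, copied from the log.
keyctl add user :spdk-test:key0 \
    "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s

# Let bdevperf resolve ":name" PSKs through the Linux keyring, then finish
# subsystem init (bdevperf was started with --wait-for-rpc).
$RPC keyring_linux_set_options --enable
$RPC framework_start_init

# Attach the controller with the keyring-backed PSK. This succeeds with the
# registered key0; the trace shows the same call with an unregistered key
# failing with an input/output error (JSON-RPC code -5).
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key0

# Cleanup, as the test's cleanup() does: detach, then find the key's serial
# number in the session keyring and unlink it.
$RPC bdev_nvme_detach_controller nvme0
keyctl unlink "$(keyctl search @s user :spdk-test:key0)"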